00:00:00.001 Started by upstream project "autotest-per-patch" build number 132599 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.221 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.222 The recommended git tool is: git 00:00:00.222 using credential 00000000-0000-0000-0000-000000000002 00:00:00.224 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.256 Fetching changes from the remote Git repository 00:00:00.261 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.304 Using shallow fetch with depth 1 00:00:00.304 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.304 > git --version # timeout=10 00:00:00.350 > git --version # 'git version 2.39.2' 00:00:00.350 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.385 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.385 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.533 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.545 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.556 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.556 > git config core.sparsecheckout # timeout=10 00:00:04.567 > git read-tree -mu HEAD # timeout=10 00:00:04.580 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.599 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.599 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.669 [Pipeline] Start of Pipeline 00:00:04.680 [Pipeline] library 00:00:04.681 Loading library shm_lib@master 00:00:04.681 Library shm_lib@master is cached. Copying from home. 00:00:04.700 [Pipeline] node 00:00:04.728 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.731 [Pipeline] { 00:00:04.743 [Pipeline] catchError 00:00:04.744 [Pipeline] { 00:00:04.754 [Pipeline] wrap 00:00:04.762 [Pipeline] { 00:00:04.767 [Pipeline] stage 00:00:04.769 [Pipeline] { (Prologue) 00:00:04.962 [Pipeline] sh 00:00:05.240 + logger -p user.info -t JENKINS-CI 00:00:05.255 [Pipeline] echo 00:00:05.257 Node: WFP8 00:00:05.266 [Pipeline] sh 00:00:05.557 [Pipeline] setCustomBuildProperty 00:00:05.567 [Pipeline] echo 00:00:05.569 Cleanup processes 00:00:05.574 [Pipeline] sh 00:00:05.850 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.850 1719848 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.858 [Pipeline] sh 00:00:06.156 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.156 ++ grep -v 'sudo pgrep' 00:00:06.156 ++ awk '{print $1}' 00:00:06.156 + sudo kill -9 00:00:06.156 + true 00:00:06.170 [Pipeline] cleanWs 00:00:06.179 [WS-CLEANUP] Deleting project workspace... 00:00:06.179 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.186 [WS-CLEANUP] done 00:00:06.190 [Pipeline] setCustomBuildProperty 00:00:06.203 [Pipeline] sh 00:00:06.478 + sudo git config --global --replace-all safe.directory '*' 00:00:06.555 [Pipeline] httpRequest 00:00:06.900 [Pipeline] echo 00:00:06.901 Sorcerer 10.211.164.20 is alive 00:00:06.910 [Pipeline] retry 00:00:06.911 [Pipeline] { 00:00:06.920 [Pipeline] httpRequest 00:00:06.924 HttpMethod: GET 00:00:06.925 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.926 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.932 Response Code: HTTP/1.1 200 OK 00:00:06.932 Success: Status code 200 is in the accepted range: 200,404 00:00:06.932 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.687 [Pipeline] } 00:00:24.705 [Pipeline] // retry 00:00:24.712 [Pipeline] sh 00:00:24.995 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:25.011 [Pipeline] httpRequest 00:00:25.388 [Pipeline] echo 00:00:25.391 Sorcerer 10.211.164.20 is alive 00:00:25.420 [Pipeline] retry 00:00:25.424 [Pipeline] { 00:00:25.444 [Pipeline] httpRequest 00:00:25.449 HttpMethod: GET 00:00:25.449 URL: http://10.211.164.20/packages/spdk_0b658ecad42c394706e518249a916093968aa2b4.tar.gz 00:00:25.449 Sending request to url: http://10.211.164.20/packages/spdk_0b658ecad42c394706e518249a916093968aa2b4.tar.gz 00:00:25.455 Response Code: HTTP/1.1 200 OK 00:00:25.455 Success: Status code 200 is in the accepted range: 200,404 00:00:25.456 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0b658ecad42c394706e518249a916093968aa2b4.tar.gz 00:03:37.825 [Pipeline] } 00:03:37.844 [Pipeline] // retry 00:03:37.852 [Pipeline] sh 00:03:38.135 + tar --no-same-owner -xf spdk_0b658ecad42c394706e518249a916093968aa2b4.tar.gz 00:03:40.734 [Pipeline] sh 00:03:41.016 + git -C spdk log 
--oneline -n5 00:03:41.016 0b658ecad bdev/nvme: Use nbdev always for local nvme_bdev pointer variables 00:03:41.016 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:03:41.016 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:03:41.016 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:03:41.016 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:03:41.027 [Pipeline] } 00:03:41.038 [Pipeline] // stage 00:03:41.045 [Pipeline] stage 00:03:41.047 [Pipeline] { (Prepare) 00:03:41.060 [Pipeline] writeFile 00:03:41.073 [Pipeline] sh 00:03:41.353 + logger -p user.info -t JENKINS-CI 00:03:41.368 [Pipeline] sh 00:03:41.703 + logger -p user.info -t JENKINS-CI 00:03:41.715 [Pipeline] sh 00:03:42.002 + cat autorun-spdk.conf 00:03:42.002 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:42.002 SPDK_TEST_NVMF=1 00:03:42.002 SPDK_TEST_NVME_CLI=1 00:03:42.002 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:42.002 SPDK_TEST_NVMF_NICS=e810 00:03:42.002 SPDK_TEST_VFIOUSER=1 00:03:42.002 SPDK_RUN_UBSAN=1 00:03:42.002 NET_TYPE=phy 00:03:42.010 RUN_NIGHTLY=0 00:03:42.017 [Pipeline] readFile 00:03:42.046 [Pipeline] withEnv 00:03:42.048 [Pipeline] { 00:03:42.062 [Pipeline] sh 00:03:42.347 + set -ex 00:03:42.347 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:42.347 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:42.347 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:42.347 ++ SPDK_TEST_NVMF=1 00:03:42.347 ++ SPDK_TEST_NVME_CLI=1 00:03:42.347 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:42.347 ++ SPDK_TEST_NVMF_NICS=e810 00:03:42.347 ++ SPDK_TEST_VFIOUSER=1 00:03:42.347 ++ SPDK_RUN_UBSAN=1 00:03:42.347 ++ NET_TYPE=phy 00:03:42.347 ++ RUN_NIGHTLY=0 00:03:42.347 + case $SPDK_TEST_NVMF_NICS in 00:03:42.347 + DRIVERS=ice 00:03:42.347 + [[ tcp == \r\d\m\a ]] 00:03:42.347 + [[ -n ice ]] 00:03:42.347 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:42.347 rmmod: 
ERROR: Module mlx4_ib is not currently loaded 00:03:42.347 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:42.347 rmmod: ERROR: Module irdma is not currently loaded 00:03:42.347 rmmod: ERROR: Module i40iw is not currently loaded 00:03:42.347 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:42.347 + true 00:03:42.347 + for D in $DRIVERS 00:03:42.347 + sudo modprobe ice 00:03:42.347 + exit 0 00:03:42.356 [Pipeline] } 00:03:42.372 [Pipeline] // withEnv 00:03:42.378 [Pipeline] } 00:03:42.393 [Pipeline] // stage 00:03:42.402 [Pipeline] catchError 00:03:42.404 [Pipeline] { 00:03:42.420 [Pipeline] timeout 00:03:42.420 Timeout set to expire in 1 hr 0 min 00:03:42.423 [Pipeline] { 00:03:42.439 [Pipeline] stage 00:03:42.442 [Pipeline] { (Tests) 00:03:42.455 [Pipeline] sh 00:03:42.736 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:42.736 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:42.736 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:42.736 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:42.736 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.736 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:42.736 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:42.736 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:42.736 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:42.736 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:42.736 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:42.736 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:42.736 + source /etc/os-release 00:03:42.736 ++ NAME='Fedora Linux' 00:03:42.736 ++ VERSION='39 (Cloud Edition)' 00:03:42.736 ++ ID=fedora 00:03:42.736 ++ VERSION_ID=39 00:03:42.736 ++ VERSION_CODENAME= 00:03:42.736 ++ PLATFORM_ID=platform:f39 00:03:42.736 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:42.736 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:42.736 ++ LOGO=fedora-logo-icon 00:03:42.736 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:42.736 ++ HOME_URL=https://fedoraproject.org/ 00:03:42.736 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:42.736 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:42.736 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:42.737 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:42.737 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:42.737 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:42.737 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:42.737 ++ SUPPORT_END=2024-11-12 00:03:42.737 ++ VARIANT='Cloud Edition' 00:03:42.737 ++ VARIANT_ID=cloud 00:03:42.737 + uname -a 00:03:42.737 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:42.737 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:45.274 Hugepages 00:03:45.274 node hugesize free / total 00:03:45.274 node0 1048576kB 0 / 0 00:03:45.274 node0 2048kB 0 / 0 00:03:45.274 node1 1048576kB 0 / 0 00:03:45.274 node1 2048kB 0 / 0 00:03:45.274 00:03:45.274 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.274 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:45.274 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:03:45.274 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:45.274 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:45.274 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:45.274 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:45.274 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:45.274 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:45.274 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:45.274 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:45.274 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:45.274 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:45.274 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:45.274 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:45.274 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:45.274 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:45.274 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:45.274 + rm -f /tmp/spdk-ld-path 00:03:45.274 + source autorun-spdk.conf 00:03:45.274 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:45.274 ++ SPDK_TEST_NVMF=1 00:03:45.274 ++ SPDK_TEST_NVME_CLI=1 00:03:45.274 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:45.274 ++ SPDK_TEST_NVMF_NICS=e810 00:03:45.274 ++ SPDK_TEST_VFIOUSER=1 00:03:45.274 ++ SPDK_RUN_UBSAN=1 00:03:45.274 ++ NET_TYPE=phy 00:03:45.274 ++ RUN_NIGHTLY=0 00:03:45.274 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:45.274 + [[ -n '' ]] 00:03:45.274 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:45.274 + for M in /var/spdk/build-*-manifest.txt 00:03:45.274 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:45.274 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:45.274 + for M in /var/spdk/build-*-manifest.txt 00:03:45.274 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:45.274 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:45.274 + for M in /var/spdk/build-*-manifest.txt 00:03:45.274 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:03:45.274 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:45.274 ++ uname 00:03:45.274 + [[ Linux == \L\i\n\u\x ]] 00:03:45.274 + sudo dmesg -T 00:03:45.274 + sudo dmesg --clear 00:03:45.274 + dmesg_pid=1721319 00:03:45.274 + [[ Fedora Linux == FreeBSD ]] 00:03:45.274 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:45.274 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:45.274 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:45.274 + [[ -x /usr/src/fio-static/fio ]] 00:03:45.274 + export FIO_BIN=/usr/src/fio-static/fio 00:03:45.274 + FIO_BIN=/usr/src/fio-static/fio 00:03:45.274 + sudo dmesg -Tw 00:03:45.274 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:45.274 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:45.274 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:45.274 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:45.274 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:45.274 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:45.274 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:45.274 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:45.274 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:45.274 12:47:45 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:45.274 12:47:45 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:45.274 12:47:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:45.274 12:47:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:45.274 12:47:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:03:45.274 12:47:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:03:45.274 12:47:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:03:45.274 12:47:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:03:45.274 12:47:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:03:45.274 12:47:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:03:45.274 12:47:45 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:45.274 12:47:45 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:45.274 12:47:45 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:45.274 12:47:45 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:45.274 12:47:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:45.274 12:47:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:45.274 12:47:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:45.274 12:47:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.274 12:47:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.274 12:47:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.274 12:47:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.275 12:47:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.275 12:47:45 -- paths/export.sh@5 -- $ export PATH 00:03:45.275 12:47:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.275 12:47:45 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:45.275 12:47:45 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:45.275 12:47:45 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732880865.XXXXXX 00:03:45.533 12:47:45 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732880865.zgEUl9 00:03:45.533 12:47:45 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:45.533 12:47:45 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:45.533 12:47:45 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:45.533 12:47:45 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:45.533 12:47:45 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:45.533 12:47:45 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:45.533 12:47:45 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:45.533 12:47:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.533 12:47:45 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:45.533 12:47:45 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:45.533 12:47:45 -- pm/common@17 -- $ local monitor 00:03:45.533 12:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.533 12:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.533 12:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.533 12:47:45 -- pm/common@21 -- $ date +%s 00:03:45.533 12:47:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.533 12:47:45 -- pm/common@21 -- $ date +%s 00:03:45.533 12:47:45 -- pm/common@25 -- $ sleep 1 00:03:45.533 12:47:45 -- pm/common@21 -- $ date +%s 00:03:45.533 12:47:45 -- pm/common@21 -- $ date +%s 00:03:45.533 12:47:45 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732880865 00:03:45.533 12:47:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732880865 00:03:45.533 12:47:45 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732880865 00:03:45.533 12:47:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732880865 00:03:45.533 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732880865_collect-vmstat.pm.log 00:03:45.533 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732880865_collect-cpu-load.pm.log 00:03:45.533 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732880865_collect-cpu-temp.pm.log 00:03:45.533 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732880865_collect-bmc-pm.bmc.pm.log 00:03:46.468 12:47:46 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:46.468 12:47:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:46.468 12:47:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:46.468 12:47:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:46.468 12:47:46 -- spdk/autobuild.sh@16 -- $ date -u 00:03:46.468 Fri Nov 29 11:47:46 AM UTC 2024 00:03:46.468 12:47:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:03:46.468 v25.01-pre-277-g0b658ecad 00:03:46.468 12:47:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:46.468 12:47:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:46.468 12:47:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:46.468 12:47:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:46.468 12:47:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:46.468 12:47:46 -- common/autotest_common.sh@10 -- $ set +x 00:03:46.468 ************************************ 00:03:46.468 START TEST ubsan 00:03:46.468 ************************************ 00:03:46.468 12:47:46 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:46.468 using ubsan 00:03:46.468 00:03:46.468 real 0m0.000s 00:03:46.468 user 0m0.000s 00:03:46.468 sys 0m0.000s 00:03:46.468 12:47:46 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:46.468 12:47:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:46.468 ************************************ 00:03:46.468 END TEST ubsan 00:03:46.468 ************************************ 00:03:46.468 12:47:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:46.468 12:47:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:46.468 12:47:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:46.468 12:47:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:46.468 12:47:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:46.468 12:47:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:46.468 12:47:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:46.468 12:47:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:46.468 12:47:46 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:46.725 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:46.726 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:46.984 Using 'verbs' RDMA provider 00:04:00.124 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:12.326 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:12.326 Creating mk/config.mk...done. 00:04:12.326 Creating mk/cc.flags.mk...done. 00:04:12.326 Type 'make' to build. 00:04:12.326 12:48:11 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:04:12.326 12:48:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:12.326 12:48:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:12.326 12:48:11 -- common/autotest_common.sh@10 -- $ set +x 00:04:12.326 ************************************ 00:04:12.326 START TEST make 00:04:12.326 ************************************ 00:04:12.326 12:48:11 make -- common/autotest_common.sh@1129 -- $ make -j96 00:04:12.326 make[1]: Nothing to be done for 'all'. 
00:04:13.266 The Meson build system 00:04:13.266 Version: 1.5.0 00:04:13.266 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:13.266 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:13.266 Build type: native build 00:04:13.266 Project name: libvfio-user 00:04:13.266 Project version: 0.0.1 00:04:13.266 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:13.267 C linker for the host machine: cc ld.bfd 2.40-14 00:04:13.267 Host machine cpu family: x86_64 00:04:13.267 Host machine cpu: x86_64 00:04:13.267 Run-time dependency threads found: YES 00:04:13.267 Library dl found: YES 00:04:13.267 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:13.267 Run-time dependency json-c found: YES 0.17 00:04:13.267 Run-time dependency cmocka found: YES 1.1.7 00:04:13.267 Program pytest-3 found: NO 00:04:13.267 Program flake8 found: NO 00:04:13.267 Program misspell-fixer found: NO 00:04:13.267 Program restructuredtext-lint found: NO 00:04:13.267 Program valgrind found: YES (/usr/bin/valgrind) 00:04:13.267 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:13.267 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:13.267 Compiler for C supports arguments -Wwrite-strings: YES 00:04:13.267 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:13.267 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:13.267 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:13.267 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:13.267 Build targets in project: 8 00:04:13.267 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:13.267 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:13.267 00:04:13.267 libvfio-user 0.0.1 00:04:13.267 00:04:13.267 User defined options 00:04:13.267 buildtype : debug 00:04:13.267 default_library: shared 00:04:13.267 libdir : /usr/local/lib 00:04:13.267 00:04:13.267 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:13.834 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:13.834 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:13.834 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:13.834 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:13.834 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:13.834 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:13.834 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:13.834 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:13.834 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:13.834 [9/37] Compiling C object samples/null.p/null.c.o 00:04:13.834 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:13.834 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:13.834 [12/37] Compiling C object samples/server.p/server.c.o 00:04:13.834 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:13.834 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:13.834 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:13.834 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:13.834 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:13.834 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:13.834 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:13.834 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:13.834 [21/37] Compiling C object samples/client.p/client.c.o 00:04:13.834 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:13.834 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:13.834 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:13.834 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:13.834 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:13.834 [27/37] Linking target samples/client 00:04:14.093 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:14.093 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:14.093 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:04:14.093 [31/37] Linking target test/unit_tests 00:04:14.093 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:14.093 [33/37] Linking target samples/server 00:04:14.093 [34/37] Linking target samples/shadow_ioeventfd_server 00:04:14.093 [35/37] Linking target samples/gpio-pci-idio-16 00:04:14.093 [36/37] Linking target samples/null 00:04:14.093 [37/37] Linking target samples/lspci 00:04:14.093 INFO: autodetecting backend as ninja 00:04:14.093 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:14.352 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:14.611 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:14.611 ninja: no work to do. 
00:04:19.884 The Meson build system
00:04:19.884 Version: 1.5.0
00:04:19.884 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:04:19.884 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:04:19.884 Build type: native build
00:04:19.884 Program cat found: YES (/usr/bin/cat)
00:04:19.884 Project name: DPDK
00:04:19.884 Project version: 24.03.0
00:04:19.884 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:19.884 C linker for the host machine: cc ld.bfd 2.40-14
00:04:19.884 Host machine cpu family: x86_64
00:04:19.884 Host machine cpu: x86_64
00:04:19.884 Message: ## Building in Developer Mode ##
00:04:19.884 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:19.884 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:19.884 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:19.884 Program python3 found: YES (/usr/bin/python3)
00:04:19.884 Program cat found: YES (/usr/bin/cat)
00:04:19.884 Compiler for C supports arguments -march=native: YES
00:04:19.884 Checking for size of "void *" : 8
00:04:19.884 Checking for size of "void *" : 8 (cached)
00:04:19.884 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:19.884 Library m found: YES
00:04:19.884 Library numa found: YES
00:04:19.884 Has header "numaif.h" : YES
00:04:19.884 Library fdt found: NO
00:04:19.884 Library execinfo found: NO
00:04:19.884 Has header "execinfo.h" : YES
00:04:19.884 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:19.884 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:19.884 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:19.884 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:19.884 Run-time dependency openssl found: YES 3.1.1
00:04:19.884 Run-time dependency libpcap found: YES 1.10.4
00:04:19.884 Has header "pcap.h" with dependency libpcap: YES
00:04:19.884 Compiler for C supports arguments -Wcast-qual: YES
00:04:19.884 Compiler for C supports arguments -Wdeprecated: YES
00:04:19.884 Compiler for C supports arguments -Wformat: YES
00:04:19.884 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:19.884 Compiler for C supports arguments -Wformat-security: NO
00:04:19.884 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:19.884 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:19.884 Compiler for C supports arguments -Wnested-externs: YES
00:04:19.884 Compiler for C supports arguments -Wold-style-definition: YES
00:04:19.884 Compiler for C supports arguments -Wpointer-arith: YES
00:04:19.884 Compiler for C supports arguments -Wsign-compare: YES
00:04:19.884 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:19.884 Compiler for C supports arguments -Wundef: YES
00:04:19.884 Compiler for C supports arguments -Wwrite-strings: YES
00:04:19.884 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:19.884 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:19.884 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:19.884 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:19.884 Program objdump found: YES (/usr/bin/objdump)
00:04:19.884 Compiler for C supports arguments -mavx512f: YES
00:04:19.884 Checking if "AVX512 checking" compiles: YES
00:04:19.884 Fetching value of define "__SSE4_2__" : 1
00:04:19.884 Fetching value of define "__AES__" : 1
00:04:19.884 Fetching value of define "__AVX__" : 1
00:04:19.884 Fetching value of define "__AVX2__" : 1
00:04:19.884 Fetching value of define "__AVX512BW__" : 1
00:04:19.884 Fetching value of define "__AVX512CD__" : 1
00:04:19.884 Fetching value of define "__AVX512DQ__" : 1
00:04:19.884 Fetching value of define "__AVX512F__" : 1
00:04:19.884 Fetching value of define "__AVX512VL__" : 1
00:04:19.884 Fetching value of define "__PCLMUL__" : 1
00:04:19.884 Fetching value of define "__RDRND__" : 1
00:04:19.884 Fetching value of define "__RDSEED__" : 1
00:04:19.884 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:19.884 Fetching value of define "__znver1__" : (undefined)
00:04:19.884 Fetching value of define "__znver2__" : (undefined)
00:04:19.884 Fetching value of define "__znver3__" : (undefined)
00:04:19.884 Fetching value of define "__znver4__" : (undefined)
00:04:19.884 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:19.884 Message: lib/log: Defining dependency "log"
00:04:19.884 Message: lib/kvargs: Defining dependency "kvargs"
00:04:19.884 Message: lib/telemetry: Defining dependency "telemetry"
00:04:19.884 Checking for function "getentropy" : NO
00:04:19.884 Message: lib/eal: Defining dependency "eal"
00:04:19.884 Message: lib/ring: Defining dependency "ring"
00:04:19.884 Message: lib/rcu: Defining dependency "rcu"
00:04:19.884 Message: lib/mempool: Defining dependency "mempool"
00:04:19.884 Message: lib/mbuf: Defining dependency "mbuf"
00:04:19.884 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:19.884 Fetching value of define "__AVX512F__" : 1 (cached)
00:04:19.885 Fetching value of define "__AVX512BW__" : 1 (cached)
00:04:19.885 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:04:19.885 Fetching value of define "__AVX512VL__" : 1 (cached)
00:04:19.885 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:04:19.885 Compiler for C supports arguments -mpclmul: YES
00:04:19.885 Compiler for C supports arguments -maes: YES
00:04:19.885 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:19.885 Compiler for C supports arguments -mavx512bw: YES
00:04:19.885 Compiler for C supports arguments -mavx512dq: YES
00:04:19.885 Compiler for C supports arguments -mavx512vl: YES
00:04:19.885 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:19.885 Compiler for C supports arguments -mavx2: YES
00:04:19.885 Compiler for C supports arguments -mavx: YES
00:04:19.885 Message: lib/net: Defining dependency "net"
00:04:19.885 Message: lib/meter: Defining dependency "meter"
00:04:19.885 Message: lib/ethdev: Defining dependency "ethdev"
00:04:19.885 Message: lib/pci: Defining dependency "pci"
00:04:19.885 Message: lib/cmdline: Defining dependency "cmdline"
00:04:19.885 Message: lib/hash: Defining dependency "hash"
00:04:19.885 Message: lib/timer: Defining dependency "timer"
00:04:19.885 Message: lib/compressdev: Defining dependency "compressdev"
00:04:19.885 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:19.885 Message: lib/dmadev: Defining dependency "dmadev"
00:04:19.885 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:19.885 Message: lib/power: Defining dependency "power"
00:04:19.885 Message: lib/reorder: Defining dependency "reorder"
00:04:19.885 Message: lib/security: Defining dependency "security"
00:04:19.885 Has header "linux/userfaultfd.h" : YES
00:04:19.885 Has header "linux/vduse.h" : YES
00:04:19.885 Message: lib/vhost: Defining dependency "vhost"
00:04:19.885 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:19.885 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:19.885 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:19.885 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:19.885 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:19.885 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:19.885 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:19.885 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:19.885 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:19.885 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:19.885 Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:19.885 Configuring doxy-api-html.conf using configuration
00:04:19.885 Configuring doxy-api-man.conf using configuration
00:04:19.885 Program mandb found: YES (/usr/bin/mandb)
00:04:19.885 Program sphinx-build found: NO
00:04:19.885 Configuring rte_build_config.h using configuration
00:04:19.885 Message:
00:04:19.885 =================
00:04:19.885 Applications Enabled
00:04:19.885 =================
00:04:19.885
00:04:19.885 apps:
00:04:19.885
00:04:19.885
00:04:19.885 Message:
00:04:19.885 =================
00:04:19.885 Libraries Enabled
00:04:19.885 =================
00:04:19.885
00:04:19.885 libs:
00:04:19.885 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:19.885 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:19.885 cryptodev, dmadev, power, reorder, security, vhost,
00:04:19.885
00:04:19.885 Message:
00:04:19.885 ===============
00:04:19.885 Drivers Enabled
00:04:19.885 ===============
00:04:19.885
00:04:19.885 common:
00:04:19.885
00:04:19.885 bus:
00:04:19.885 pci, vdev,
00:04:19.885 mempool:
00:04:19.885 ring,
00:04:19.885 dma:
00:04:19.885
00:04:19.885 net:
00:04:19.885
00:04:19.885 crypto:
00:04:19.885
00:04:19.885 compress:
00:04:19.885
00:04:19.885 vdpa:
00:04:19.885
00:04:19.885
00:04:19.885 Message:
00:04:19.885 =================
00:04:19.885 Content Skipped
00:04:19.885 =================
00:04:19.885
00:04:19.885 apps:
00:04:19.885 dumpcap: explicitly disabled via build config
00:04:19.885 graph: explicitly disabled via build config
00:04:19.885 pdump: explicitly disabled via build config
00:04:19.885 proc-info: explicitly disabled via build config
00:04:19.885 test-acl: explicitly disabled via build config
00:04:19.885 test-bbdev: explicitly disabled via build config
00:04:19.885 test-cmdline: explicitly disabled via build config
00:04:19.885 test-compress-perf: explicitly disabled via build config
00:04:19.885 test-crypto-perf: explicitly disabled via build config
00:04:19.885 test-dma-perf: explicitly disabled via build config
00:04:19.885 test-eventdev: explicitly disabled via build config
00:04:19.885 test-fib: explicitly disabled via build config
00:04:19.885 test-flow-perf: explicitly disabled via build config
00:04:19.885 test-gpudev: explicitly disabled via build config
00:04:19.885 test-mldev: explicitly disabled via build config
00:04:19.885 test-pipeline: explicitly disabled via build config
00:04:19.885 test-pmd: explicitly disabled via build config
00:04:19.885 test-regex: explicitly disabled via build config
00:04:19.885 test-sad: explicitly disabled via build config
00:04:19.885 test-security-perf: explicitly disabled via build config
00:04:19.885
00:04:19.885 libs:
00:04:19.885 argparse: explicitly disabled via build config
00:04:19.885 metrics: explicitly disabled via build config
00:04:19.885 acl: explicitly disabled via build config
00:04:19.885 bbdev: explicitly disabled via build config
00:04:19.885 bitratestats: explicitly disabled via build config
00:04:19.885 bpf: explicitly disabled via build config
00:04:19.885 cfgfile: explicitly disabled via build config
00:04:19.885 distributor: explicitly disabled via build config
00:04:19.885 efd: explicitly disabled via build config
00:04:19.885 eventdev: explicitly disabled via build config
00:04:19.885 dispatcher: explicitly disabled via build config
00:04:19.885 gpudev: explicitly disabled via build config
00:04:19.885 gro: explicitly disabled via build config
00:04:19.885 gso: explicitly disabled via build config
00:04:19.885 ip_frag: explicitly disabled via build config
00:04:19.885 jobstats: explicitly disabled via build config
00:04:19.885 latencystats: explicitly disabled via build config
00:04:19.885 lpm: explicitly disabled via build config
00:04:19.885 member: explicitly disabled via build config
00:04:19.885 pcapng: explicitly disabled via build config
00:04:19.885 rawdev: explicitly disabled via build config
00:04:19.885 regexdev: explicitly disabled via build config
00:04:19.885 mldev: explicitly disabled via build config
00:04:19.885 rib: explicitly disabled via build config
00:04:19.885 sched: explicitly disabled via build config
00:04:19.885 stack: explicitly disabled via build config
00:04:19.885 ipsec: explicitly disabled via build config
00:04:19.885 pdcp: explicitly disabled via build config
00:04:19.885 fib: explicitly disabled via build config
00:04:19.885 port: explicitly disabled via build config
00:04:19.885 pdump: explicitly disabled via build config
00:04:19.885 table: explicitly disabled via build config
00:04:19.885 pipeline: explicitly disabled via build config
00:04:19.885 graph: explicitly disabled via build config
00:04:19.885 node: explicitly disabled via build config
00:04:19.885
00:04:19.885 drivers:
00:04:19.885 common/cpt: not in enabled drivers build config
00:04:19.885 common/dpaax: not in enabled drivers build config
00:04:19.885 common/iavf: not in enabled drivers build config
00:04:19.885 common/idpf: not in enabled drivers build config
00:04:19.885 common/ionic: not in enabled drivers build config
00:04:19.885 common/mvep: not in enabled drivers build config
00:04:19.885 common/octeontx: not in enabled drivers build config
00:04:19.885 bus/auxiliary: not in enabled drivers build config
00:04:19.885 bus/cdx: not in enabled drivers build config
00:04:19.885 bus/dpaa: not in enabled drivers build config
00:04:19.885 bus/fslmc: not in enabled drivers build config
00:04:19.885 bus/ifpga: not in enabled drivers build config
00:04:19.885 bus/platform: not in enabled drivers build config
00:04:19.885 bus/uacce: not in enabled drivers build config
00:04:19.885 bus/vmbus: not in enabled drivers build config
00:04:19.885 common/cnxk: not in enabled drivers build config
00:04:19.885 common/mlx5: not in enabled drivers build config
00:04:19.885 common/nfp: not in enabled drivers build config
00:04:19.885 common/nitrox: not in enabled drivers build config
00:04:19.885 common/qat: not in enabled drivers build config
00:04:19.885 common/sfc_efx: not in enabled drivers build config
00:04:19.885 mempool/bucket: not in enabled drivers build config
00:04:19.885 mempool/cnxk: not in enabled drivers build config
00:04:19.885 mempool/dpaa: not in enabled drivers build config
00:04:19.885 mempool/dpaa2: not in enabled drivers build config
00:04:19.885 mempool/octeontx: not in enabled drivers build config
00:04:19.885 mempool/stack: not in enabled drivers build config
00:04:19.885 dma/cnxk: not in enabled drivers build config
00:04:19.885 dma/dpaa: not in enabled drivers build config
00:04:19.885 dma/dpaa2: not in enabled drivers build config
00:04:19.885 dma/hisilicon: not in enabled drivers build config
00:04:19.885 dma/idxd: not in enabled drivers build config
00:04:19.885 dma/ioat: not in enabled drivers build config
00:04:19.885 dma/skeleton: not in enabled drivers build config
00:04:19.885 net/af_packet: not in enabled drivers build config
00:04:19.885 net/af_xdp: not in enabled drivers build config
00:04:19.885 net/ark: not in enabled drivers build config
00:04:19.885 net/atlantic: not in enabled drivers build config
00:04:19.885 net/avp: not in enabled drivers build config
00:04:19.885 net/axgbe: not in enabled drivers build config
00:04:19.885 net/bnx2x: not in enabled drivers build config
00:04:19.885 net/bnxt: not in enabled drivers build config
00:04:19.885 net/bonding: not in enabled drivers build config
00:04:19.885 net/cnxk: not in enabled drivers build config
00:04:19.885 net/cpfl: not in enabled drivers build config
00:04:19.885 net/cxgbe: not in enabled drivers build config
00:04:19.885 net/dpaa: not in enabled drivers build config
00:04:19.885 net/dpaa2: not in enabled drivers build config
00:04:19.885 net/e1000: not in enabled drivers build config
00:04:19.885 net/ena: not in enabled drivers build config
00:04:19.885 net/enetc: not in enabled drivers build config
00:04:19.886 net/enetfec: not in enabled drivers build config
00:04:19.886 net/enic: not in enabled drivers build config
00:04:19.886 net/failsafe: not in enabled drivers build config
00:04:19.886 net/fm10k: not in enabled drivers build config
00:04:19.886 net/gve: not in enabled drivers build config
00:04:19.886 net/hinic: not in enabled drivers build config
00:04:19.886 net/hns3: not in enabled drivers build config
00:04:19.886 net/i40e: not in enabled drivers build config
00:04:19.886 net/iavf: not in enabled drivers build config
00:04:19.886 net/ice: not in enabled drivers build config
00:04:19.886 net/idpf: not in enabled drivers build config
00:04:19.886 net/igc: not in enabled drivers build config
00:04:19.886 net/ionic: not in enabled drivers build config
00:04:19.886 net/ipn3ke: not in enabled drivers build config
00:04:19.886 net/ixgbe: not in enabled drivers build config
00:04:19.886 net/mana: not in enabled drivers build config
00:04:19.886 net/memif: not in enabled drivers build config
00:04:19.886 net/mlx4: not in enabled drivers build config
00:04:19.886 net/mlx5: not in enabled drivers build config
00:04:19.886 net/mvneta: not in enabled drivers build config
00:04:19.886 net/mvpp2: not in enabled drivers build config
00:04:19.886 net/netvsc: not in enabled drivers build config
00:04:19.886 net/nfb: not in enabled drivers build config
00:04:19.886 net/nfp: not in enabled drivers build config
00:04:19.886 net/ngbe: not in enabled drivers build config
00:04:19.886 net/null: not in enabled drivers build config
00:04:19.886 net/octeontx: not in enabled drivers build config
00:04:19.886 net/octeon_ep: not in enabled drivers build config
00:04:19.886 net/pcap: not in enabled drivers build config
00:04:19.886 net/pfe: not in enabled drivers build config
00:04:19.886 net/qede: not in enabled drivers build config
00:04:19.886 net/ring: not in enabled drivers build config
00:04:19.886 net/sfc: not in enabled drivers build config
00:04:19.886 net/softnic: not in enabled drivers build config
00:04:19.886 net/tap: not in enabled drivers build config
00:04:19.886 net/thunderx: not in enabled drivers build config
00:04:19.886 net/txgbe: not in enabled drivers build config
00:04:19.886 net/vdev_netvsc: not in enabled drivers build config
00:04:19.886 net/vhost: not in enabled drivers build config
00:04:19.886 net/virtio: not in enabled drivers build config
00:04:19.886 net/vmxnet3: not in enabled drivers build config
00:04:19.886 raw/*: missing internal dependency, "rawdev"
00:04:19.886 crypto/armv8: not in enabled drivers build config
00:04:19.886 crypto/bcmfs: not in enabled drivers build config
00:04:19.886 crypto/caam_jr: not in enabled drivers build config
00:04:19.886 crypto/ccp: not in enabled drivers build config
00:04:19.886 crypto/cnxk: not in enabled drivers build config
00:04:19.886 crypto/dpaa_sec: not in enabled drivers build config
00:04:19.886 crypto/dpaa2_sec: not in enabled drivers build config
00:04:19.886 crypto/ipsec_mb: not in enabled drivers build config
00:04:19.886 crypto/mlx5: not in enabled drivers build config
00:04:19.886 crypto/mvsam: not in enabled drivers build config
00:04:19.886 crypto/nitrox: not in enabled drivers build config
00:04:19.886 crypto/null: not in enabled drivers build config
00:04:19.886 crypto/octeontx: not in enabled drivers build config
00:04:19.886 crypto/openssl: not in enabled drivers build config
00:04:19.886 crypto/scheduler: not in enabled drivers build config
00:04:19.886 crypto/uadk: not in enabled drivers build config
00:04:19.886 crypto/virtio: not in enabled drivers build config
00:04:19.886 compress/isal: not in enabled drivers build config
00:04:19.886 compress/mlx5: not in enabled drivers build config
00:04:19.886 compress/nitrox: not in enabled drivers build config
00:04:19.886 compress/octeontx: not in enabled drivers build config
00:04:19.886 compress/zlib: not in enabled drivers build config
00:04:19.886 regex/*: missing internal dependency, "regexdev"
00:04:19.886 ml/*: missing internal dependency, "mldev"
00:04:19.886 vdpa/ifc: not in enabled drivers build config
00:04:19.886 vdpa/mlx5: not in enabled drivers build config
00:04:19.886 vdpa/nfp: not in enabled drivers build config
00:04:19.886 vdpa/sfc: not in enabled drivers build config
00:04:19.886 event/*: missing internal dependency, "eventdev"
00:04:19.886 baseband/*: missing internal dependency, "bbdev"
00:04:19.886 gpu/*: missing internal dependency, "gpudev"
00:04:19.886
00:04:19.886
00:04:19.886 Build targets in project: 85
00:04:19.886
00:04:19.886 DPDK 24.03.0
00:04:19.886
00:04:19.886 User defined options
00:04:19.886 buildtype : debug
00:04:19.886 default_library : shared
00:04:19.886 libdir : lib
00:04:19.886 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:19.886 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:04:19.886 c_link_args :
00:04:19.886 cpu_instruction_set: native
00:04:19.886 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:04:19.886 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:04:19.886 enable_docs : false
00:04:19.886 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:04:19.886 enable_kmods : false
00:04:19.886 max_lcores : 128
00:04:19.886 tests : false
00:04:19.886
00:04:19.886 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:20.160 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:04:20.160 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:04:20.160 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:04:20.160 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:04:20.418 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:04:20.418 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:04:20.418 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:04:20.418 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:04:20.418 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:04:20.418 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:04:20.418 [10/268] Linking static target lib/librte_kvargs.a
00:04:20.418 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:04:20.418 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:04:20.418 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:04:20.418 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:04:20.418 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:04:20.418 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:04:20.418 [17/268] Linking static target lib/librte_log.a
00:04:20.418 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:04:20.418 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:04:20.418 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:04:20.418 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:04:20.418 [22/268] Linking static target lib/librte_pci.a
00:04:20.675 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:04:20.675 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:04:20.675 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:04:20.675 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:04:20.675 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:04:20.675 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:04:20.675 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:04:20.675 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:04:20.675 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:04:20.675 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:04:20.675 [33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:04:20.675 [34/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:04:20.675 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:04:20.675 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:04:20.675 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:04:20.675 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:04:20.675 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:04:20.675 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:04:20.675 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:04:20.675 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:04:20.675 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:04:20.936 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:04:20.936 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:20.936 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:04:20.936 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:04:20.937 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:04:20.937 [49/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:04:20.937 [50/268] Linking static target lib/librte_meter.a
00:04:20.937 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:04:20.937 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:04:20.937 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:04:20.937 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:04:20.937 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:04:20.937 [56/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:04:20.937 [57/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:04:20.937 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:04:20.937 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:04:20.937 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:04:20.937 [61/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:04:20.937 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:04:20.937 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:04:20.937 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:04:20.937 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:04:20.937 [66/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:04:20.937 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:04:20.937 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:04:20.937 [69/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:04:20.937 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:04:20.937 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:04:20.937 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:04:20.937 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:04:20.937 [74/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:04:20.937 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:04:20.937 [76/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:04:20.937 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:04:20.937 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:04:20.937 [79/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:04:20.937 [80/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:04:20.937 [81/268] Linking static target lib/librte_telemetry.a
00:04:20.937 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:04:20.937 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:04:20.937 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:04:20.937 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:04:20.937 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:04:20.937 [87/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:04:20.937 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:04:20.937 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:04:20.937 [90/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:04:20.937 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:04:20.937 [92/268] Linking static target lib/librte_ring.a
00:04:20.937 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:04:20.937 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:04:20.937 [95/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:04:20.937 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:04:20.937 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:04:20.937 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:04:20.937 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:04:20.937 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:04:20.937 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:04:20.937 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:04:20.937 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:04:20.937 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:04:20.937 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:04:20.937 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:04:20.937 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:04:20.937 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:04:20.937 [109/268] Linking static target lib/librte_rcu.a
00:04:20.937 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:04:20.937 [111/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:04:20.937 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:04:20.937 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:04:20.937 [114/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:04:20.937 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:04:20.937 [116/268] Linking static target lib/librte_mempool.a
00:04:20.937 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:04:20.937 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:04:20.937 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:04:20.937 [120/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:04:20.937 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:04:20.937 [122/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:04:20.937 [123/268] Linking static target lib/librte_cmdline.a
00:04:20.937 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:04:21.196 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:04:21.196 [126/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:04:21.196 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:04:21.196 [128/268] Linking static target lib/librte_net.a
00:04:21.196 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:04:21.196 [130/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:04:21.196 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:04:21.196 [132/268] Linking static target lib/librte_eal.a
00:04:21.196 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:04:21.196 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.196 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:04:21.196 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:04:21.196 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.196 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:04:21.196 [139/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:04:21.196 [140/268] Linking target lib/librte_log.so.24.1
00:04:21.196 [141/268] Linking static target lib/librte_mbuf.a
00:04:21.196 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:04:21.196 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.196 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:04:21.197 [145/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:04:21.197 [146/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:04:21.197 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:04:21.197 [148/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:04:21.197 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:04:21.197 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.197 [151/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:04:21.197 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:04:21.197 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:04:21.197 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:04:21.197 [155/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:04:21.197 [156/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:04:21.197 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:04:21.456 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:04:21.456 [159/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:04:21.456 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:04:21.456 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.456 [162/268] Linking static target lib/librte_timer.a
00:04:21.456 [163/268] Linking target lib/librte_kvargs.so.24.1
00:04:21.456 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:04:21.456 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:04:21.456 [166/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.456 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:04:21.456 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:04:21.456 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:04:21.456 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:04:21.456 [171/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:04:21.456 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:04:21.457 [173/268] Linking static target lib/librte_dmadev.a
00:04:21.457 [174/268] Linking static target lib/librte_reorder.a
00:04:21.457 [175/268] Linking target lib/librte_telemetry.so.24.1
00:04:21.457 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:04:21.457 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:04:21.457 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:04:21.457 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:04:21.457 [180/268] Linking static target lib/librte_security.a
00:04:21.457 [181/268] Linking static target lib/librte_compressdev.a
00:04:21.457 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:04:21.457 [183/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:04:21.457 [184/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:04:21.457 [185/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:04:21.457 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:04:21.457 [187/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:21.457 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:04:21.457 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:04:21.457 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:04:21.457 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:04:21.457 [192/268] Linking static target drivers/librte_bus_vdev.a
00:04:21.457 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:04:21.457 [194/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:04:21.457 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:04:21.457 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:04:21.457 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:04:21.457 [198/268] Linking static target lib/librte_power.a
00:04:21.457 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:04:21.457 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:04:21.715 [201/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:04:21.715 [202/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:04:21.715 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:04:21.715 [204/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:04:21.715 [205/268] Linking static target lib/librte_cryptodev.a
00:04:21.715 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:21.715 [207/268] Linking static target lib/librte_hash.a 00:04:21.715 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:21.715 [209/268] Linking static target drivers/librte_bus_pci.a 00:04:21.715 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:21.715 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.715 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:21.715 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:21.715 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.715 [215/268] Linking static target drivers/librte_mempool_ring.a 00:04:21.973 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.973 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.973 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.973 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.973 [220/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:21.973 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.231 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:22.231 [223/268] Linking static target lib/librte_ethdev.a 00:04:22.231 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:22.487 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:04:22.487 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:22.487 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.422 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:23.422 [229/268] Linking static target lib/librte_vhost.a 00:04:23.422 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.325 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.637 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.637 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.637 [234/268] Linking target lib/librte_eal.so.24.1 00:04:30.637 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:30.896 [236/268] Linking target lib/librte_pci.so.24.1 00:04:30.896 [237/268] Linking target lib/librte_timer.so.24.1 00:04:30.896 [238/268] Linking target lib/librte_ring.so.24.1 00:04:30.896 [239/268] Linking target lib/librte_meter.so.24.1 00:04:30.896 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:30.896 [241/268] Linking target lib/librte_dmadev.so.24.1 00:04:30.896 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:30.896 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:30.896 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:30.896 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:30.896 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:30.896 [247/268] Linking target lib/librte_rcu.so.24.1 00:04:30.896 [248/268] Linking target 
lib/librte_mempool.so.24.1 00:04:30.896 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:31.154 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:31.154 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:31.154 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:31.154 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:31.154 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:31.413 [255/268] Linking target lib/librte_net.so.24.1 00:04:31.413 [256/268] Linking target lib/librte_compressdev.so.24.1 00:04:31.413 [257/268] Linking target lib/librte_reorder.so.24.1 00:04:31.413 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:31.413 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:31.413 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:31.413 [261/268] Linking target lib/librte_hash.so.24.1 00:04:31.413 [262/268] Linking target lib/librte_cmdline.so.24.1 00:04:31.413 [263/268] Linking target lib/librte_security.so.24.1 00:04:31.413 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:31.671 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:31.671 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:31.671 [267/268] Linking target lib/librte_power.so.24.1 00:04:31.671 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:31.671 INFO: autodetecting backend as ninja 00:04:31.671 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:04:43.881 CC lib/ut/ut.o 00:04:43.881 CC lib/ut_mock/mock.o 00:04:43.881 CC lib/log/log.o 00:04:43.881 CC lib/log/log_deprecated.o 00:04:43.881 CC lib/log/log_flags.o 00:04:43.881 LIB libspdk_ut.a 
00:04:43.881 SO libspdk_ut.so.2.0 00:04:43.881 LIB libspdk_ut_mock.a 00:04:43.881 LIB libspdk_log.a 00:04:43.881 SO libspdk_ut_mock.so.6.0 00:04:43.881 SYMLINK libspdk_ut.so 00:04:43.881 SO libspdk_log.so.7.1 00:04:43.881 SYMLINK libspdk_ut_mock.so 00:04:43.881 SYMLINK libspdk_log.so 00:04:43.881 CXX lib/trace_parser/trace.o 00:04:43.881 CC lib/dma/dma.o 00:04:43.881 CC lib/ioat/ioat.o 00:04:43.881 CC lib/util/base64.o 00:04:43.881 CC lib/util/bit_array.o 00:04:43.881 CC lib/util/cpuset.o 00:04:43.881 CC lib/util/crc16.o 00:04:43.881 CC lib/util/crc32.o 00:04:43.881 CC lib/util/crc32c.o 00:04:43.881 CC lib/util/crc32_ieee.o 00:04:43.881 CC lib/util/crc64.o 00:04:43.881 CC lib/util/dif.o 00:04:43.881 CC lib/util/file.o 00:04:43.881 CC lib/util/fd.o 00:04:43.881 CC lib/util/fd_group.o 00:04:43.881 CC lib/util/hexlify.o 00:04:43.881 CC lib/util/iov.o 00:04:43.881 CC lib/util/math.o 00:04:43.881 CC lib/util/net.o 00:04:43.881 CC lib/util/strerror_tls.o 00:04:43.881 CC lib/util/pipe.o 00:04:43.881 CC lib/util/uuid.o 00:04:43.881 CC lib/util/string.o 00:04:43.881 CC lib/util/xor.o 00:04:43.881 CC lib/util/zipf.o 00:04:43.881 CC lib/util/md5.o 00:04:43.881 CC lib/vfio_user/host/vfio_user_pci.o 00:04:43.881 CC lib/vfio_user/host/vfio_user.o 00:04:43.881 LIB libspdk_dma.a 00:04:43.881 SO libspdk_dma.so.5.0 00:04:43.881 LIB libspdk_ioat.a 00:04:43.881 SYMLINK libspdk_dma.so 00:04:43.881 SO libspdk_ioat.so.7.0 00:04:43.881 SYMLINK libspdk_ioat.so 00:04:43.881 LIB libspdk_vfio_user.a 00:04:43.881 SO libspdk_vfio_user.so.5.0 00:04:43.881 SYMLINK libspdk_vfio_user.so 00:04:43.881 LIB libspdk_util.a 00:04:43.881 SO libspdk_util.so.10.1 00:04:44.140 SYMLINK libspdk_util.so 00:04:44.140 LIB libspdk_trace_parser.a 00:04:44.140 SO libspdk_trace_parser.so.6.0 00:04:44.140 SYMLINK libspdk_trace_parser.so 00:04:44.399 CC lib/idxd/idxd.o 00:04:44.399 CC lib/idxd/idxd_user.o 00:04:44.399 CC lib/idxd/idxd_kernel.o 00:04:44.399 CC lib/conf/conf.o 00:04:44.399 CC lib/vmd/vmd.o 00:04:44.399 
CC lib/vmd/led.o 00:04:44.399 CC lib/rdma_utils/rdma_utils.o 00:04:44.399 CC lib/env_dpdk/env.o 00:04:44.399 CC lib/env_dpdk/memory.o 00:04:44.399 CC lib/env_dpdk/pci.o 00:04:44.399 CC lib/env_dpdk/init.o 00:04:44.399 CC lib/env_dpdk/threads.o 00:04:44.399 CC lib/env_dpdk/pci_ioat.o 00:04:44.399 CC lib/env_dpdk/pci_virtio.o 00:04:44.399 CC lib/env_dpdk/pci_vmd.o 00:04:44.399 CC lib/env_dpdk/pci_event.o 00:04:44.399 CC lib/env_dpdk/pci_idxd.o 00:04:44.399 CC lib/env_dpdk/sigbus_handler.o 00:04:44.399 CC lib/env_dpdk/pci_dpdk.o 00:04:44.399 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:44.399 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:44.399 CC lib/json/json_parse.o 00:04:44.399 CC lib/json/json_util.o 00:04:44.399 CC lib/json/json_write.o 00:04:44.657 LIB libspdk_conf.a 00:04:44.657 SO libspdk_conf.so.6.0 00:04:44.658 LIB libspdk_rdma_utils.a 00:04:44.658 LIB libspdk_json.a 00:04:44.658 SO libspdk_rdma_utils.so.1.0 00:04:44.658 SYMLINK libspdk_conf.so 00:04:44.658 SO libspdk_json.so.6.0 00:04:44.658 SYMLINK libspdk_rdma_utils.so 00:04:44.658 SYMLINK libspdk_json.so 00:04:44.916 LIB libspdk_idxd.a 00:04:44.916 SO libspdk_idxd.so.12.1 00:04:44.916 LIB libspdk_vmd.a 00:04:44.916 SO libspdk_vmd.so.6.0 00:04:44.916 SYMLINK libspdk_idxd.so 00:04:44.916 SYMLINK libspdk_vmd.so 00:04:44.916 CC lib/rdma_provider/common.o 00:04:44.916 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:44.916 CC lib/jsonrpc/jsonrpc_server.o 00:04:44.916 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:44.916 CC lib/jsonrpc/jsonrpc_client.o 00:04:44.916 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:45.175 LIB libspdk_rdma_provider.a 00:04:45.175 LIB libspdk_jsonrpc.a 00:04:45.175 SO libspdk_rdma_provider.so.7.0 00:04:45.175 SO libspdk_jsonrpc.so.6.0 00:04:45.434 SYMLINK libspdk_rdma_provider.so 00:04:45.434 SYMLINK libspdk_jsonrpc.so 00:04:45.434 LIB libspdk_env_dpdk.a 00:04:45.434 SO libspdk_env_dpdk.so.15.1 00:04:45.693 SYMLINK libspdk_env_dpdk.so 00:04:45.693 CC lib/rpc/rpc.o 00:04:45.951 LIB libspdk_rpc.a 
00:04:45.951 SO libspdk_rpc.so.6.0 00:04:45.951 SYMLINK libspdk_rpc.so 00:04:46.210 CC lib/keyring/keyring_rpc.o 00:04:46.210 CC lib/keyring/keyring.o 00:04:46.210 CC lib/trace/trace.o 00:04:46.210 CC lib/notify/notify.o 00:04:46.210 CC lib/notify/notify_rpc.o 00:04:46.210 CC lib/trace/trace_flags.o 00:04:46.210 CC lib/trace/trace_rpc.o 00:04:46.469 LIB libspdk_notify.a 00:04:46.469 SO libspdk_notify.so.6.0 00:04:46.469 LIB libspdk_keyring.a 00:04:46.469 SO libspdk_keyring.so.2.0 00:04:46.469 LIB libspdk_trace.a 00:04:46.469 SYMLINK libspdk_notify.so 00:04:46.469 SO libspdk_trace.so.11.0 00:04:46.469 SYMLINK libspdk_keyring.so 00:04:46.469 SYMLINK libspdk_trace.so 00:04:47.037 CC lib/thread/thread.o 00:04:47.037 CC lib/thread/iobuf.o 00:04:47.037 CC lib/sock/sock.o 00:04:47.037 CC lib/sock/sock_rpc.o 00:04:47.296 LIB libspdk_sock.a 00:04:47.296 SO libspdk_sock.so.10.0 00:04:47.296 SYMLINK libspdk_sock.so 00:04:47.554 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:47.554 CC lib/nvme/nvme_fabric.o 00:04:47.554 CC lib/nvme/nvme_ctrlr.o 00:04:47.554 CC lib/nvme/nvme_ns.o 00:04:47.554 CC lib/nvme/nvme_pcie_common.o 00:04:47.554 CC lib/nvme/nvme_ns_cmd.o 00:04:47.554 CC lib/nvme/nvme.o 00:04:47.554 CC lib/nvme/nvme_pcie.o 00:04:47.554 CC lib/nvme/nvme_qpair.o 00:04:47.554 CC lib/nvme/nvme_quirks.o 00:04:47.554 CC lib/nvme/nvme_transport.o 00:04:47.554 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:47.554 CC lib/nvme/nvme_discovery.o 00:04:47.554 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:47.554 CC lib/nvme/nvme_tcp.o 00:04:47.554 CC lib/nvme/nvme_opal.o 00:04:47.554 CC lib/nvme/nvme_io_msg.o 00:04:47.554 CC lib/nvme/nvme_poll_group.o 00:04:47.554 CC lib/nvme/nvme_zns.o 00:04:47.554 CC lib/nvme/nvme_stubs.o 00:04:47.554 CC lib/nvme/nvme_vfio_user.o 00:04:47.554 CC lib/nvme/nvme_auth.o 00:04:47.554 CC lib/nvme/nvme_cuse.o 00:04:47.554 CC lib/nvme/nvme_rdma.o 00:04:48.122 LIB libspdk_thread.a 00:04:48.122 SO libspdk_thread.so.11.0 00:04:48.122 SYMLINK libspdk_thread.so 00:04:48.381 CC 
lib/accel/accel.o 00:04:48.381 CC lib/accel/accel_rpc.o 00:04:48.381 CC lib/accel/accel_sw.o 00:04:48.381 CC lib/virtio/virtio_vfio_user.o 00:04:48.381 CC lib/virtio/virtio.o 00:04:48.381 CC lib/virtio/virtio_vhost_user.o 00:04:48.381 CC lib/virtio/virtio_pci.o 00:04:48.381 CC lib/blob/blobstore.o 00:04:48.381 CC lib/blob/request.o 00:04:48.381 CC lib/blob/zeroes.o 00:04:48.381 CC lib/blob/blob_bs_dev.o 00:04:48.381 CC lib/fsdev/fsdev.o 00:04:48.381 CC lib/fsdev/fsdev_io.o 00:04:48.381 CC lib/fsdev/fsdev_rpc.o 00:04:48.381 CC lib/vfu_tgt/tgt_endpoint.o 00:04:48.381 CC lib/vfu_tgt/tgt_rpc.o 00:04:48.381 CC lib/init/json_config.o 00:04:48.381 CC lib/init/subsystem.o 00:04:48.381 CC lib/init/subsystem_rpc.o 00:04:48.381 CC lib/init/rpc.o 00:04:48.639 LIB libspdk_init.a 00:04:48.639 SO libspdk_init.so.6.0 00:04:48.639 LIB libspdk_vfu_tgt.a 00:04:48.639 LIB libspdk_virtio.a 00:04:48.639 SYMLINK libspdk_init.so 00:04:48.639 SO libspdk_vfu_tgt.so.3.0 00:04:48.639 SO libspdk_virtio.so.7.0 00:04:48.639 SYMLINK libspdk_vfu_tgt.so 00:04:48.639 SYMLINK libspdk_virtio.so 00:04:48.898 LIB libspdk_fsdev.a 00:04:48.898 SO libspdk_fsdev.so.2.0 00:04:48.898 CC lib/event/app.o 00:04:48.898 CC lib/event/log_rpc.o 00:04:48.898 CC lib/event/reactor.o 00:04:48.898 CC lib/event/app_rpc.o 00:04:48.898 CC lib/event/scheduler_static.o 00:04:48.898 SYMLINK libspdk_fsdev.so 00:04:49.157 LIB libspdk_accel.a 00:04:49.157 SO libspdk_accel.so.16.0 00:04:49.157 SYMLINK libspdk_accel.so 00:04:49.157 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:49.157 LIB libspdk_nvme.a 00:04:49.416 LIB libspdk_event.a 00:04:49.416 SO libspdk_event.so.14.0 00:04:49.416 SO libspdk_nvme.so.15.0 00:04:49.416 SYMLINK libspdk_event.so 00:04:49.416 CC lib/bdev/bdev.o 00:04:49.416 CC lib/bdev/bdev_rpc.o 00:04:49.416 CC lib/bdev/bdev_zone.o 00:04:49.416 CC lib/bdev/part.o 00:04:49.416 CC lib/bdev/scsi_nvme.o 00:04:49.675 SYMLINK libspdk_nvme.so 00:04:49.675 LIB libspdk_fuse_dispatcher.a 00:04:49.675 SO 
libspdk_fuse_dispatcher.so.1.0 00:04:49.934 SYMLINK libspdk_fuse_dispatcher.so 00:04:50.503 LIB libspdk_blob.a 00:04:50.503 SO libspdk_blob.so.12.0 00:04:50.503 SYMLINK libspdk_blob.so 00:04:51.070 CC lib/lvol/lvol.o 00:04:51.070 CC lib/blobfs/blobfs.o 00:04:51.070 CC lib/blobfs/tree.o 00:04:51.329 LIB libspdk_bdev.a 00:04:51.329 SO libspdk_bdev.so.17.0 00:04:51.588 SYMLINK libspdk_bdev.so 00:04:51.588 LIB libspdk_blobfs.a 00:04:51.588 SO libspdk_blobfs.so.11.0 00:04:51.588 LIB libspdk_lvol.a 00:04:51.588 SO libspdk_lvol.so.11.0 00:04:51.588 SYMLINK libspdk_blobfs.so 00:04:51.588 SYMLINK libspdk_lvol.so 00:04:51.847 CC lib/ftl/ftl_core.o 00:04:51.847 CC lib/ftl/ftl_layout.o 00:04:51.847 CC lib/ftl/ftl_init.o 00:04:51.847 CC lib/ftl/ftl_io.o 00:04:51.847 CC lib/ftl/ftl_debug.o 00:04:51.847 CC lib/ftl/ftl_l2p_flat.o 00:04:51.847 CC lib/ftl/ftl_nv_cache.o 00:04:51.847 CC lib/ftl/ftl_sb.o 00:04:51.847 CC lib/ftl/ftl_l2p.o 00:04:51.847 CC lib/ftl/ftl_band_ops.o 00:04:51.847 CC lib/ftl/ftl_band.o 00:04:51.847 CC lib/ftl/ftl_writer.o 00:04:51.847 CC lib/ftl/ftl_l2p_cache.o 00:04:51.847 CC lib/ftl/ftl_rq.o 00:04:51.847 CC lib/ftl/ftl_reloc.o 00:04:51.847 CC lib/ftl/ftl_p2l.o 00:04:51.847 CC lib/ftl/ftl_p2l_log.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt.o 00:04:51.847 CC lib/nvmf/ctrlr.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:51.847 CC lib/nvmf/ctrlr_discovery.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:51.847 CC lib/nvmf/ctrlr_bdev.o 00:04:51.847 CC lib/nvmf/subsystem.o 00:04:51.847 CC lib/ublk/ublk_rpc.o 00:04:51.847 CC lib/nvmf/transport.o 00:04:51.847 CC lib/nvmf/nvmf.o 00:04:51.847 CC lib/ublk/ublk.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:51.847 CC lib/nvmf/tcp.o 00:04:51.847 CC lib/nvmf/nvmf_rpc.o 00:04:51.847 CC lib/nvmf/vfio_user.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:51.847 CC lib/nvmf/stubs.o 00:04:51.847 CC 
lib/nvmf/mdns_server.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:51.847 CC lib/nvmf/rdma.o 00:04:51.847 CC lib/scsi/dev.o 00:04:51.847 CC lib/nvmf/auth.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:51.847 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:51.847 CC lib/scsi/lun.o 00:04:51.847 CC lib/scsi/port.o 00:04:51.847 CC lib/ftl/utils/ftl_md.o 00:04:51.847 CC lib/scsi/scsi_bdev.o 00:04:51.847 CC lib/ftl/utils/ftl_bitmap.o 00:04:51.847 CC lib/ftl/utils/ftl_mempool.o 00:04:51.847 CC lib/ftl/utils/ftl_property.o 00:04:51.847 CC lib/ftl/utils/ftl_conf.o 00:04:51.847 CC lib/scsi/scsi.o 00:04:51.847 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:51.847 CC lib/nbd/nbd_rpc.o 00:04:51.847 CC lib/nbd/nbd.o 00:04:51.847 CC lib/scsi/task.o 00:04:51.847 CC lib/scsi/scsi_pr.o 00:04:51.847 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:51.847 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:51.847 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:51.847 CC lib/scsi/scsi_rpc.o 00:04:51.847 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:51.847 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:51.847 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:51.847 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:51.847 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:51.847 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:51.847 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:51.847 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:51.847 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:51.847 CC lib/ftl/base/ftl_base_bdev.o 00:04:51.847 CC lib/ftl/base/ftl_base_dev.o 00:04:51.847 CC lib/ftl/ftl_trace.o 00:04:52.413 LIB libspdk_nbd.a 00:04:52.413 SO libspdk_nbd.so.7.0 00:04:52.413 SYMLINK libspdk_nbd.so 00:04:52.413 LIB libspdk_scsi.a 00:04:52.413 SO libspdk_scsi.so.9.0 00:04:52.671 SYMLINK libspdk_scsi.so 00:04:52.671 LIB libspdk_ublk.a 00:04:52.671 SO libspdk_ublk.so.3.0 00:04:52.671 SYMLINK libspdk_ublk.so 00:04:52.671 
LIB libspdk_ftl.a 00:04:52.930 CC lib/iscsi/conn.o 00:04:52.930 CC lib/iscsi/init_grp.o 00:04:52.930 CC lib/iscsi/iscsi.o 00:04:52.930 CC lib/iscsi/param.o 00:04:52.930 CC lib/iscsi/portal_grp.o 00:04:52.930 CC lib/iscsi/tgt_node.o 00:04:52.930 CC lib/iscsi/iscsi_subsystem.o 00:04:52.930 CC lib/iscsi/task.o 00:04:52.930 CC lib/iscsi/iscsi_rpc.o 00:04:52.930 CC lib/vhost/vhost.o 00:04:52.930 CC lib/vhost/vhost_rpc.o 00:04:52.930 CC lib/vhost/vhost_blk.o 00:04:52.930 CC lib/vhost/rte_vhost_user.o 00:04:52.930 CC lib/vhost/vhost_scsi.o 00:04:52.930 SO libspdk_ftl.so.9.0 00:04:53.189 SYMLINK libspdk_ftl.so 00:04:53.448 LIB libspdk_nvmf.a 00:04:53.448 SO libspdk_nvmf.so.20.0 00:04:53.706 SYMLINK libspdk_nvmf.so 00:04:53.706 LIB libspdk_vhost.a 00:04:53.706 SO libspdk_vhost.so.8.0 00:04:53.707 SYMLINK libspdk_vhost.so 00:04:53.965 LIB libspdk_iscsi.a 00:04:53.965 SO libspdk_iscsi.so.8.0 00:04:53.965 SYMLINK libspdk_iscsi.so 00:04:54.534 CC module/vfu_device/vfu_virtio.o 00:04:54.534 CC module/vfu_device/vfu_virtio_blk.o 00:04:54.534 CC module/vfu_device/vfu_virtio_fs.o 00:04:54.534 CC module/vfu_device/vfu_virtio_scsi.o 00:04:54.535 CC module/vfu_device/vfu_virtio_rpc.o 00:04:54.535 CC module/env_dpdk/env_dpdk_rpc.o 00:04:54.794 CC module/accel/ioat/accel_ioat_rpc.o 00:04:54.794 CC module/accel/ioat/accel_ioat.o 00:04:54.794 CC module/keyring/linux/keyring.o 00:04:54.794 CC module/keyring/file/keyring_rpc.o 00:04:54.794 CC module/keyring/file/keyring.o 00:04:54.794 CC module/keyring/linux/keyring_rpc.o 00:04:54.794 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:54.794 CC module/scheduler/gscheduler/gscheduler.o 00:04:54.794 CC module/sock/posix/posix.o 00:04:54.794 CC module/accel/error/accel_error.o 00:04:54.794 CC module/accel/error/accel_error_rpc.o 00:04:54.794 CC module/accel/dsa/accel_dsa.o 00:04:54.794 CC module/accel/dsa/accel_dsa_rpc.o 00:04:54.794 LIB libspdk_env_dpdk_rpc.a 00:04:54.794 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:54.794 CC 
module/fsdev/aio/fsdev_aio.o 00:04:54.794 CC module/fsdev/aio/linux_aio_mgr.o 00:04:54.794 CC module/blob/bdev/blob_bdev.o 00:04:54.794 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:54.794 CC module/accel/iaa/accel_iaa.o 00:04:54.794 CC module/accel/iaa/accel_iaa_rpc.o 00:04:54.794 SO libspdk_env_dpdk_rpc.so.6.0 00:04:54.794 SYMLINK libspdk_env_dpdk_rpc.so 00:04:54.794 LIB libspdk_scheduler_gscheduler.a 00:04:54.794 LIB libspdk_keyring_file.a 00:04:54.794 LIB libspdk_keyring_linux.a 00:04:54.794 SO libspdk_scheduler_gscheduler.so.4.0 00:04:54.794 SO libspdk_keyring_file.so.2.0 00:04:54.794 LIB libspdk_scheduler_dpdk_governor.a 00:04:54.794 LIB libspdk_scheduler_dynamic.a 00:04:54.794 SO libspdk_keyring_linux.so.1.0 00:04:54.794 LIB libspdk_accel_ioat.a 00:04:54.794 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:54.794 SO libspdk_scheduler_dynamic.so.4.0 00:04:54.794 SYMLINK libspdk_scheduler_gscheduler.so 00:04:54.794 LIB libspdk_accel_error.a 00:04:54.794 SYMLINK libspdk_keyring_file.so 00:04:54.794 SO libspdk_accel_ioat.so.6.0 00:04:55.052 LIB libspdk_accel_iaa.a 00:04:55.052 SO libspdk_accel_error.so.2.0 00:04:55.052 SYMLINK libspdk_keyring_linux.so 00:04:55.052 SYMLINK libspdk_scheduler_dynamic.so 00:04:55.052 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:55.052 LIB libspdk_accel_dsa.a 00:04:55.052 SO libspdk_accel_iaa.so.3.0 00:04:55.052 SYMLINK libspdk_accel_ioat.so 00:04:55.052 LIB libspdk_blob_bdev.a 00:04:55.052 SYMLINK libspdk_accel_error.so 00:04:55.052 SO libspdk_accel_dsa.so.5.0 00:04:55.052 SO libspdk_blob_bdev.so.12.0 00:04:55.052 SYMLINK libspdk_accel_iaa.so 00:04:55.052 SYMLINK libspdk_accel_dsa.so 00:04:55.052 LIB libspdk_vfu_device.a 00:04:55.052 SYMLINK libspdk_blob_bdev.so 00:04:55.052 SO libspdk_vfu_device.so.3.0 00:04:55.052 SYMLINK libspdk_vfu_device.so 00:04:55.312 LIB libspdk_fsdev_aio.a 00:04:55.312 LIB libspdk_sock_posix.a 00:04:55.312 SO libspdk_fsdev_aio.so.1.0 00:04:55.312 SO libspdk_sock_posix.so.6.0 00:04:55.312 SYMLINK 
libspdk_fsdev_aio.so 00:04:55.312 SYMLINK libspdk_sock_posix.so 00:04:55.570 CC module/bdev/lvol/vbdev_lvol.o 00:04:55.570 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:55.570 CC module/bdev/gpt/gpt.o 00:04:55.570 CC module/bdev/gpt/vbdev_gpt.o 00:04:55.570 CC module/bdev/delay/vbdev_delay.o 00:04:55.570 CC module/bdev/passthru/vbdev_passthru.o 00:04:55.570 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:55.570 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:55.570 CC module/bdev/error/vbdev_error.o 00:04:55.570 CC module/bdev/error/vbdev_error_rpc.o 00:04:55.570 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:55.570 CC module/bdev/iscsi/bdev_iscsi.o 00:04:55.570 CC module/bdev/malloc/bdev_malloc.o 00:04:55.570 CC module/bdev/nvme/bdev_nvme.o 00:04:55.570 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:55.570 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:55.570 CC module/bdev/raid/bdev_raid.o 00:04:55.570 CC module/bdev/nvme/nvme_rpc.o 00:04:55.570 CC module/bdev/nvme/bdev_mdns_client.o 00:04:55.570 CC module/bdev/raid/bdev_raid_sb.o 00:04:55.570 CC module/bdev/raid/raid0.o 00:04:55.570 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:55.570 CC module/bdev/nvme/vbdev_opal.o 00:04:55.570 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:55.570 CC module/bdev/raid/bdev_raid_rpc.o 00:04:55.570 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:55.570 CC module/bdev/raid/concat.o 00:04:55.570 CC module/bdev/raid/raid1.o 00:04:55.570 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:55.570 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:55.570 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:55.570 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:55.570 CC module/bdev/null/bdev_null.o 00:04:55.570 CC module/bdev/null/bdev_null_rpc.o 00:04:55.570 CC module/bdev/aio/bdev_aio.o 00:04:55.570 CC module/bdev/aio/bdev_aio_rpc.o 00:04:55.570 CC module/bdev/split/vbdev_split.o 00:04:55.570 CC module/bdev/split/vbdev_split_rpc.o 00:04:55.570 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:55.570 CC 
module/blobfs/bdev/blobfs_bdev.o 00:04:55.570 CC module/bdev/ftl/bdev_ftl.o 00:04:55.570 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:55.829 LIB libspdk_blobfs_bdev.a 00:04:55.829 SO libspdk_blobfs_bdev.so.6.0 00:04:55.829 LIB libspdk_bdev_error.a 00:04:55.829 LIB libspdk_bdev_split.a 00:04:55.829 LIB libspdk_bdev_passthru.a 00:04:55.829 LIB libspdk_bdev_gpt.a 00:04:55.829 LIB libspdk_bdev_null.a 00:04:55.829 SO libspdk_bdev_split.so.6.0 00:04:55.829 SO libspdk_bdev_passthru.so.6.0 00:04:55.829 LIB libspdk_bdev_ftl.a 00:04:55.829 SO libspdk_bdev_error.so.6.0 00:04:55.829 SO libspdk_bdev_gpt.so.6.0 00:04:55.829 SYMLINK libspdk_blobfs_bdev.so 00:04:55.829 LIB libspdk_bdev_zone_block.a 00:04:55.829 SO libspdk_bdev_null.so.6.0 00:04:55.829 LIB libspdk_bdev_delay.a 00:04:55.829 LIB libspdk_bdev_iscsi.a 00:04:55.829 SO libspdk_bdev_ftl.so.6.0 00:04:55.829 LIB libspdk_bdev_aio.a 00:04:55.829 SYMLINK libspdk_bdev_error.so 00:04:55.829 SO libspdk_bdev_iscsi.so.6.0 00:04:55.829 SO libspdk_bdev_zone_block.so.6.0 00:04:55.829 SYMLINK libspdk_bdev_split.so 00:04:55.829 LIB libspdk_bdev_malloc.a 00:04:55.829 SYMLINK libspdk_bdev_gpt.so 00:04:55.829 SYMLINK libspdk_bdev_passthru.so 00:04:55.829 SO libspdk_bdev_delay.so.6.0 00:04:55.829 SO libspdk_bdev_aio.so.6.0 00:04:55.829 SYMLINK libspdk_bdev_ftl.so 00:04:55.829 SYMLINK libspdk_bdev_null.so 00:04:56.088 SO libspdk_bdev_malloc.so.6.0 00:04:56.089 SYMLINK libspdk_bdev_iscsi.so 00:04:56.089 SYMLINK libspdk_bdev_zone_block.so 00:04:56.089 SYMLINK libspdk_bdev_delay.so 00:04:56.089 SYMLINK libspdk_bdev_aio.so 00:04:56.089 SYMLINK libspdk_bdev_malloc.so 00:04:56.089 LIB libspdk_bdev_lvol.a 00:04:56.089 LIB libspdk_bdev_virtio.a 00:04:56.089 SO libspdk_bdev_lvol.so.6.0 00:04:56.089 SO libspdk_bdev_virtio.so.6.0 00:04:56.089 SYMLINK libspdk_bdev_virtio.so 00:04:56.089 SYMLINK libspdk_bdev_lvol.so 00:04:56.348 LIB libspdk_bdev_raid.a 00:04:56.348 SO libspdk_bdev_raid.so.6.0 00:04:56.607 SYMLINK libspdk_bdev_raid.so 00:04:57.543 LIB 
libspdk_bdev_nvme.a 00:04:57.543 SO libspdk_bdev_nvme.so.7.1 00:04:57.543 SYMLINK libspdk_bdev_nvme.so 00:04:58.110 CC module/event/subsystems/keyring/keyring.o 00:04:58.110 CC module/event/subsystems/sock/sock.o 00:04:58.110 CC module/event/subsystems/scheduler/scheduler.o 00:04:58.110 CC module/event/subsystems/fsdev/fsdev.o 00:04:58.110 CC module/event/subsystems/vmd/vmd.o 00:04:58.110 CC module/event/subsystems/iobuf/iobuf.o 00:04:58.110 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:58.110 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:58.110 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:58.110 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:58.369 LIB libspdk_event_scheduler.a 00:04:58.369 LIB libspdk_event_keyring.a 00:04:58.369 LIB libspdk_event_fsdev.a 00:04:58.369 LIB libspdk_event_vhost_blk.a 00:04:58.369 LIB libspdk_event_sock.a 00:04:58.369 LIB libspdk_event_vfu_tgt.a 00:04:58.369 SO libspdk_event_scheduler.so.4.0 00:04:58.369 LIB libspdk_event_vmd.a 00:04:58.369 SO libspdk_event_vhost_blk.so.3.0 00:04:58.369 SO libspdk_event_keyring.so.1.0 00:04:58.369 LIB libspdk_event_iobuf.a 00:04:58.369 SO libspdk_event_fsdev.so.1.0 00:04:58.369 SO libspdk_event_sock.so.5.0 00:04:58.369 SO libspdk_event_vfu_tgt.so.3.0 00:04:58.369 SO libspdk_event_vmd.so.6.0 00:04:58.369 SYMLINK libspdk_event_scheduler.so 00:04:58.369 SO libspdk_event_iobuf.so.3.0 00:04:58.369 SYMLINK libspdk_event_keyring.so 00:04:58.369 SYMLINK libspdk_event_vhost_blk.so 00:04:58.369 SYMLINK libspdk_event_fsdev.so 00:04:58.369 SYMLINK libspdk_event_sock.so 00:04:58.369 SYMLINK libspdk_event_vfu_tgt.so 00:04:58.369 SYMLINK libspdk_event_vmd.so 00:04:58.369 SYMLINK libspdk_event_iobuf.so 00:04:58.627 CC module/event/subsystems/accel/accel.o 00:04:58.886 LIB libspdk_event_accel.a 00:04:58.886 SO libspdk_event_accel.so.6.0 00:04:58.886 SYMLINK libspdk_event_accel.so 00:04:59.144 CC module/event/subsystems/bdev/bdev.o 00:04:59.403 LIB libspdk_event_bdev.a 00:04:59.403 SO 
libspdk_event_bdev.so.6.0 00:04:59.403 SYMLINK libspdk_event_bdev.so 00:04:59.661 CC module/event/subsystems/scsi/scsi.o 00:04:59.661 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:59.661 CC module/event/subsystems/nbd/nbd.o 00:04:59.661 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:59.918 CC module/event/subsystems/ublk/ublk.o 00:04:59.918 LIB libspdk_event_scsi.a 00:04:59.918 LIB libspdk_event_nbd.a 00:04:59.918 SO libspdk_event_scsi.so.6.0 00:04:59.918 LIB libspdk_event_ublk.a 00:04:59.918 SO libspdk_event_nbd.so.6.0 00:04:59.918 SO libspdk_event_ublk.so.3.0 00:04:59.918 LIB libspdk_event_nvmf.a 00:04:59.918 SYMLINK libspdk_event_scsi.so 00:04:59.918 SYMLINK libspdk_event_nbd.so 00:04:59.918 SO libspdk_event_nvmf.so.6.0 00:04:59.918 SYMLINK libspdk_event_ublk.so 00:05:00.176 SYMLINK libspdk_event_nvmf.so 00:05:00.176 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:00.176 CC module/event/subsystems/iscsi/iscsi.o 00:05:00.435 LIB libspdk_event_vhost_scsi.a 00:05:00.435 SO libspdk_event_vhost_scsi.so.3.0 00:05:00.435 LIB libspdk_event_iscsi.a 00:05:00.435 SYMLINK libspdk_event_vhost_scsi.so 00:05:00.435 SO libspdk_event_iscsi.so.6.0 00:05:00.435 SYMLINK libspdk_event_iscsi.so 00:05:00.695 SO libspdk.so.6.0 00:05:00.695 SYMLINK libspdk.so 00:05:00.953 CC app/spdk_nvme_identify/identify.o 00:05:00.953 CC app/spdk_lspci/spdk_lspci.o 00:05:00.953 CC test/rpc_client/rpc_client_test.o 00:05:00.953 CC app/spdk_nvme_perf/perf.o 00:05:00.953 CC app/spdk_nvme_discover/discovery_aer.o 00:05:00.953 CXX app/trace/trace.o 00:05:00.953 TEST_HEADER include/spdk/accel.h 00:05:00.953 TEST_HEADER include/spdk/accel_module.h 00:05:00.953 TEST_HEADER include/spdk/assert.h 00:05:00.953 TEST_HEADER include/spdk/barrier.h 00:05:00.953 TEST_HEADER include/spdk/base64.h 00:05:00.953 TEST_HEADER include/spdk/bdev.h 00:05:00.953 TEST_HEADER include/spdk/bdev_module.h 00:05:00.953 TEST_HEADER include/spdk/bit_array.h 00:05:00.953 CC app/trace_record/trace_record.o 00:05:00.953 
TEST_HEADER include/spdk/bit_pool.h 00:05:00.953 TEST_HEADER include/spdk/bdev_zone.h 00:05:00.953 TEST_HEADER include/spdk/blob_bdev.h 00:05:00.953 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:00.953 CC app/spdk_top/spdk_top.o 00:05:00.953 TEST_HEADER include/spdk/config.h 00:05:00.953 TEST_HEADER include/spdk/blobfs.h 00:05:00.953 TEST_HEADER include/spdk/blob.h 00:05:00.953 TEST_HEADER include/spdk/conf.h 00:05:00.953 TEST_HEADER include/spdk/cpuset.h 00:05:00.953 TEST_HEADER include/spdk/crc16.h 00:05:00.953 TEST_HEADER include/spdk/crc32.h 00:05:00.953 TEST_HEADER include/spdk/crc64.h 00:05:00.953 TEST_HEADER include/spdk/dif.h 00:05:00.953 TEST_HEADER include/spdk/dma.h 00:05:00.953 TEST_HEADER include/spdk/endian.h 00:05:00.953 TEST_HEADER include/spdk/env_dpdk.h 00:05:00.953 TEST_HEADER include/spdk/env.h 00:05:00.953 TEST_HEADER include/spdk/event.h 00:05:00.953 TEST_HEADER include/spdk/fd.h 00:05:00.953 TEST_HEADER include/spdk/fd_group.h 00:05:00.953 TEST_HEADER include/spdk/file.h 00:05:00.953 TEST_HEADER include/spdk/fsdev.h 00:05:00.953 TEST_HEADER include/spdk/fsdev_module.h 00:05:00.953 TEST_HEADER include/spdk/ftl.h 00:05:00.953 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:00.953 TEST_HEADER include/spdk/gpt_spec.h 00:05:00.953 TEST_HEADER include/spdk/histogram_data.h 00:05:00.953 TEST_HEADER include/spdk/hexlify.h 00:05:00.953 TEST_HEADER include/spdk/idxd.h 00:05:00.953 TEST_HEADER include/spdk/init.h 00:05:00.953 TEST_HEADER include/spdk/ioat.h 00:05:00.953 TEST_HEADER include/spdk/idxd_spec.h 00:05:00.953 TEST_HEADER include/spdk/ioat_spec.h 00:05:00.953 TEST_HEADER include/spdk/json.h 00:05:00.953 TEST_HEADER include/spdk/jsonrpc.h 00:05:00.953 TEST_HEADER include/spdk/iscsi_spec.h 00:05:00.953 TEST_HEADER include/spdk/keyring.h 00:05:00.953 TEST_HEADER include/spdk/likely.h 00:05:00.953 TEST_HEADER include/spdk/keyring_module.h 00:05:00.953 TEST_HEADER include/spdk/log.h 00:05:00.953 TEST_HEADER include/spdk/md5.h 00:05:00.953 
TEST_HEADER include/spdk/lvol.h 00:05:00.953 TEST_HEADER include/spdk/memory.h 00:05:00.953 TEST_HEADER include/spdk/nbd.h 00:05:00.953 TEST_HEADER include/spdk/mmio.h 00:05:01.218 TEST_HEADER include/spdk/net.h 00:05:01.218 CC app/nvmf_tgt/nvmf_main.o 00:05:01.218 TEST_HEADER include/spdk/notify.h 00:05:01.218 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:01.218 TEST_HEADER include/spdk/nvme.h 00:05:01.218 TEST_HEADER include/spdk/nvme_intel.h 00:05:01.218 CC app/spdk_dd/spdk_dd.o 00:05:01.218 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:01.218 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:01.218 TEST_HEADER include/spdk/nvme_spec.h 00:05:01.218 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:01.218 TEST_HEADER include/spdk/nvme_zns.h 00:05:01.218 CC app/iscsi_tgt/iscsi_tgt.o 00:05:01.218 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:01.218 TEST_HEADER include/spdk/nvmf.h 00:05:01.218 TEST_HEADER include/spdk/nvmf_spec.h 00:05:01.218 TEST_HEADER include/spdk/nvmf_transport.h 00:05:01.218 TEST_HEADER include/spdk/opal.h 00:05:01.218 TEST_HEADER include/spdk/opal_spec.h 00:05:01.218 TEST_HEADER include/spdk/pci_ids.h 00:05:01.218 TEST_HEADER include/spdk/pipe.h 00:05:01.218 TEST_HEADER include/spdk/queue.h 00:05:01.218 TEST_HEADER include/spdk/reduce.h 00:05:01.218 TEST_HEADER include/spdk/rpc.h 00:05:01.218 TEST_HEADER include/spdk/scheduler.h 00:05:01.218 TEST_HEADER include/spdk/scsi.h 00:05:01.218 TEST_HEADER include/spdk/sock.h 00:05:01.218 TEST_HEADER include/spdk/scsi_spec.h 00:05:01.218 TEST_HEADER include/spdk/string.h 00:05:01.218 TEST_HEADER include/spdk/stdinc.h 00:05:01.218 TEST_HEADER include/spdk/thread.h 00:05:01.218 TEST_HEADER include/spdk/trace_parser.h 00:05:01.218 TEST_HEADER include/spdk/trace.h 00:05:01.218 TEST_HEADER include/spdk/tree.h 00:05:01.218 TEST_HEADER include/spdk/ublk.h 00:05:01.218 TEST_HEADER include/spdk/util.h 00:05:01.218 TEST_HEADER include/spdk/uuid.h 00:05:01.218 TEST_HEADER include/spdk/version.h 00:05:01.218 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:05:01.218 TEST_HEADER include/spdk/vhost.h 00:05:01.218 CC app/spdk_tgt/spdk_tgt.o 00:05:01.218 TEST_HEADER include/spdk/vmd.h 00:05:01.218 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:01.218 TEST_HEADER include/spdk/xor.h 00:05:01.218 TEST_HEADER include/spdk/zipf.h 00:05:01.218 CXX test/cpp_headers/accel.o 00:05:01.218 CXX test/cpp_headers/assert.o 00:05:01.218 CXX test/cpp_headers/barrier.o 00:05:01.218 CXX test/cpp_headers/accel_module.o 00:05:01.218 CXX test/cpp_headers/bdev.o 00:05:01.218 CXX test/cpp_headers/base64.o 00:05:01.218 CXX test/cpp_headers/bit_array.o 00:05:01.218 CXX test/cpp_headers/bdev_module.o 00:05:01.218 CXX test/cpp_headers/bdev_zone.o 00:05:01.218 CXX test/cpp_headers/blob_bdev.o 00:05:01.218 CXX test/cpp_headers/blobfs_bdev.o 00:05:01.218 CXX test/cpp_headers/blobfs.o 00:05:01.218 CXX test/cpp_headers/bit_pool.o 00:05:01.218 CXX test/cpp_headers/blob.o 00:05:01.218 CXX test/cpp_headers/conf.o 00:05:01.218 CXX test/cpp_headers/config.o 00:05:01.218 CXX test/cpp_headers/crc32.o 00:05:01.218 CXX test/cpp_headers/crc16.o 00:05:01.218 CXX test/cpp_headers/cpuset.o 00:05:01.218 CXX test/cpp_headers/env_dpdk.o 00:05:01.218 CXX test/cpp_headers/dif.o 00:05:01.218 CXX test/cpp_headers/crc64.o 00:05:01.218 CXX test/cpp_headers/dma.o 00:05:01.218 CXX test/cpp_headers/env.o 00:05:01.218 CXX test/cpp_headers/endian.o 00:05:01.218 CXX test/cpp_headers/event.o 00:05:01.218 CXX test/cpp_headers/file.o 00:05:01.218 CXX test/cpp_headers/fd.o 00:05:01.218 CXX test/cpp_headers/fsdev.o 00:05:01.218 CXX test/cpp_headers/fsdev_module.o 00:05:01.218 CXX test/cpp_headers/fd_group.o 00:05:01.218 CXX test/cpp_headers/ftl.o 00:05:01.218 CXX test/cpp_headers/fuse_dispatcher.o 00:05:01.218 CXX test/cpp_headers/hexlify.o 00:05:01.218 CXX test/cpp_headers/histogram_data.o 00:05:01.218 CXX test/cpp_headers/idxd.o 00:05:01.218 CXX test/cpp_headers/gpt_spec.o 00:05:01.218 CXX test/cpp_headers/init.o 00:05:01.218 CXX 
test/cpp_headers/idxd_spec.o 00:05:01.218 CXX test/cpp_headers/ioat_spec.o 00:05:01.218 CXX test/cpp_headers/ioat.o 00:05:01.218 CXX test/cpp_headers/json.o 00:05:01.218 CXX test/cpp_headers/iscsi_spec.o 00:05:01.218 CXX test/cpp_headers/keyring_module.o 00:05:01.218 CXX test/cpp_headers/keyring.o 00:05:01.218 CXX test/cpp_headers/likely.o 00:05:01.218 CXX test/cpp_headers/log.o 00:05:01.218 CXX test/cpp_headers/jsonrpc.o 00:05:01.218 CXX test/cpp_headers/lvol.o 00:05:01.218 CXX test/cpp_headers/md5.o 00:05:01.218 CXX test/cpp_headers/mmio.o 00:05:01.219 CXX test/cpp_headers/nbd.o 00:05:01.219 CXX test/cpp_headers/memory.o 00:05:01.219 CXX test/cpp_headers/net.o 00:05:01.219 CXX test/cpp_headers/nvme_intel.o 00:05:01.219 CXX test/cpp_headers/notify.o 00:05:01.219 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:01.219 CXX test/cpp_headers/nvme_ocssd.o 00:05:01.219 CXX test/cpp_headers/nvme.o 00:05:01.219 CXX test/cpp_headers/nvme_spec.o 00:05:01.219 CXX test/cpp_headers/nvme_zns.o 00:05:01.219 CXX test/cpp_headers/nvmf_cmd.o 00:05:01.219 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:01.219 CXX test/cpp_headers/nvmf.o 00:05:01.219 CXX test/cpp_headers/nvmf_transport.o 00:05:01.219 CXX test/cpp_headers/nvmf_spec.o 00:05:01.219 CXX test/cpp_headers/opal.o 00:05:01.219 CC test/thread/poller_perf/poller_perf.o 00:05:01.219 CC test/app/jsoncat/jsoncat.o 00:05:01.219 CC test/env/memory/memory_ut.o 00:05:01.219 CC test/app/histogram_perf/histogram_perf.o 00:05:01.219 CC test/app/stub/stub.o 00:05:01.219 CC examples/ioat/verify/verify.o 00:05:01.219 CC test/dma/test_dma/test_dma.o 00:05:01.219 CC examples/ioat/perf/perf.o 00:05:01.219 CC app/fio/nvme/fio_plugin.o 00:05:01.219 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:01.219 CC examples/util/zipf/zipf.o 00:05:01.219 CC test/env/vtophys/vtophys.o 00:05:01.219 CC test/env/pci/pci_ut.o 00:05:01.219 CC test/app/bdev_svc/bdev_svc.o 00:05:01.483 CC app/fio/bdev/fio_plugin.o 00:05:01.483 LINK spdk_lspci 00:05:01.483 LINK 
rpc_client_test 00:05:01.483 LINK spdk_nvme_discover 00:05:01.483 LINK interrupt_tgt 00:05:01.741 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:01.741 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:01.741 CC test/env/mem_callbacks/mem_callbacks.o 00:05:01.741 LINK iscsi_tgt 00:05:01.741 LINK jsoncat 00:05:01.741 LINK spdk_tgt 00:05:01.741 CXX test/cpp_headers/opal_spec.o 00:05:01.741 CXX test/cpp_headers/pci_ids.o 00:05:01.741 CXX test/cpp_headers/pipe.o 00:05:01.741 LINK nvmf_tgt 00:05:01.741 CXX test/cpp_headers/queue.o 00:05:01.741 CXX test/cpp_headers/reduce.o 00:05:01.742 CXX test/cpp_headers/rpc.o 00:05:01.742 CXX test/cpp_headers/scheduler.o 00:05:01.742 LINK vtophys 00:05:01.742 CXX test/cpp_headers/scsi.o 00:05:01.742 CXX test/cpp_headers/scsi_spec.o 00:05:01.742 CXX test/cpp_headers/sock.o 00:05:01.742 CXX test/cpp_headers/stdinc.o 00:05:01.742 CXX test/cpp_headers/string.o 00:05:01.742 CXX test/cpp_headers/thread.o 00:05:01.742 CXX test/cpp_headers/trace.o 00:05:01.742 CXX test/cpp_headers/trace_parser.o 00:05:01.742 CXX test/cpp_headers/tree.o 00:05:01.742 CXX test/cpp_headers/ublk.o 00:05:01.742 CXX test/cpp_headers/util.o 00:05:01.742 CXX test/cpp_headers/uuid.o 00:05:01.742 CXX test/cpp_headers/version.o 00:05:01.742 CXX test/cpp_headers/vfio_user_pci.o 00:05:01.742 CXX test/cpp_headers/vfio_user_spec.o 00:05:01.742 CXX test/cpp_headers/vhost.o 00:05:01.742 CXX test/cpp_headers/vmd.o 00:05:01.742 CXX test/cpp_headers/xor.o 00:05:01.742 CXX test/cpp_headers/zipf.o 00:05:01.742 LINK ioat_perf 00:05:01.742 LINK verify 00:05:01.742 LINK poller_perf 00:05:02.002 LINK histogram_perf 00:05:02.002 LINK spdk_trace_record 00:05:02.002 LINK spdk_dd 00:05:02.002 LINK zipf 00:05:02.002 LINK env_dpdk_post_init 00:05:02.002 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:02.002 LINK stub 00:05:02.002 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:02.002 LINK spdk_trace 00:05:02.002 LINK bdev_svc 00:05:02.002 LINK pci_ut 00:05:02.260 LINK spdk_nvme 00:05:02.260 
LINK test_dma 00:05:02.260 LINK nvme_fuzz 00:05:02.260 CC app/vhost/vhost.o 00:05:02.260 LINK spdk_top 00:05:02.260 CC examples/idxd/perf/perf.o 00:05:02.260 CC test/event/reactor_perf/reactor_perf.o 00:05:02.260 CC test/event/event_perf/event_perf.o 00:05:02.260 CC test/event/reactor/reactor.o 00:05:02.260 CC examples/sock/hello_world/hello_sock.o 00:05:02.260 LINK spdk_bdev 00:05:02.260 LINK vhost_fuzz 00:05:02.260 CC test/event/app_repeat/app_repeat.o 00:05:02.260 CC test/event/scheduler/scheduler.o 00:05:02.260 CC examples/vmd/led/led.o 00:05:02.260 CC examples/vmd/lsvmd/lsvmd.o 00:05:02.519 LINK mem_callbacks 00:05:02.519 LINK spdk_nvme_identify 00:05:02.519 CC examples/thread/thread/thread_ex.o 00:05:02.519 LINK spdk_nvme_perf 00:05:02.519 LINK event_perf 00:05:02.519 LINK reactor_perf 00:05:02.519 LINK reactor 00:05:02.519 LINK vhost 00:05:02.519 LINK led 00:05:02.519 LINK lsvmd 00:05:02.519 LINK app_repeat 00:05:02.519 LINK hello_sock 00:05:02.519 LINK scheduler 00:05:02.519 LINK idxd_perf 00:05:02.519 LINK memory_ut 00:05:02.779 LINK thread 00:05:02.779 CC test/nvme/connect_stress/connect_stress.o 00:05:02.779 CC test/nvme/overhead/overhead.o 00:05:02.779 CC test/nvme/aer/aer.o 00:05:02.779 CC test/nvme/compliance/nvme_compliance.o 00:05:02.779 CC test/nvme/fused_ordering/fused_ordering.o 00:05:02.779 CC test/blobfs/mkfs/mkfs.o 00:05:02.779 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:02.779 CC test/nvme/e2edp/nvme_dp.o 00:05:02.779 CC test/nvme/simple_copy/simple_copy.o 00:05:02.779 CC test/nvme/reset/reset.o 00:05:02.779 CC test/nvme/reserve/reserve.o 00:05:02.779 CC test/nvme/err_injection/err_injection.o 00:05:02.779 CC test/nvme/cuse/cuse.o 00:05:02.779 CC test/nvme/startup/startup.o 00:05:02.779 CC test/nvme/fdp/fdp.o 00:05:02.779 CC test/nvme/boot_partition/boot_partition.o 00:05:02.779 CC test/nvme/sgl/sgl.o 00:05:02.779 CC test/accel/dif/dif.o 00:05:02.779 CC test/lvol/esnap/esnap.o 00:05:02.779 LINK boot_partition 00:05:02.779 LINK 
err_injection 00:05:02.779 LINK connect_stress 00:05:02.779 LINK fused_ordering 00:05:02.779 LINK doorbell_aers 00:05:02.779 LINK startup 00:05:02.779 LINK reserve 00:05:02.779 LINK mkfs 00:05:02.779 LINK simple_copy 00:05:03.038 LINK nvme_dp 00:05:03.038 LINK aer 00:05:03.038 LINK overhead 00:05:03.038 LINK reset 00:05:03.038 LINK sgl 00:05:03.038 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:03.038 LINK nvme_compliance 00:05:03.038 LINK fdp 00:05:03.038 CC examples/nvme/hotplug/hotplug.o 00:05:03.038 CC examples/nvme/reconnect/reconnect.o 00:05:03.038 CC examples/nvme/abort/abort.o 00:05:03.038 CC examples/nvme/arbitration/arbitration.o 00:05:03.038 CC examples/nvme/hello_world/hello_world.o 00:05:03.038 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:03.038 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:03.038 CC examples/accel/perf/accel_perf.o 00:05:03.038 CC examples/blob/cli/blobcli.o 00:05:03.038 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:03.038 CC examples/blob/hello_world/hello_blob.o 00:05:03.296 LINK iscsi_fuzz 00:05:03.296 LINK cmb_copy 00:05:03.296 LINK pmr_persistence 00:05:03.296 LINK hotplug 00:05:03.296 LINK hello_world 00:05:03.296 LINK dif 00:05:03.296 LINK arbitration 00:05:03.296 LINK abort 00:05:03.296 LINK reconnect 00:05:03.296 LINK nvme_manage 00:05:03.296 LINK hello_blob 00:05:03.296 LINK hello_fsdev 00:05:03.554 LINK accel_perf 00:05:03.554 LINK blobcli 00:05:03.813 LINK cuse 00:05:03.813 CC test/bdev/bdevio/bdevio.o 00:05:04.110 CC examples/bdev/hello_world/hello_bdev.o 00:05:04.110 CC examples/bdev/bdevperf/bdevperf.o 00:05:04.110 LINK bdevio 00:05:04.110 LINK hello_bdev 00:05:04.678 LINK bdevperf 00:05:05.246 CC examples/nvmf/nvmf/nvmf.o 00:05:05.246 LINK nvmf 00:05:06.620 LINK esnap 00:05:06.620 00:05:06.620 real 0m55.255s 00:05:06.620 user 7m57.682s 00:05:06.620 sys 3m35.667s 00:05:06.620 12:49:06 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:06.620 12:49:06 make -- common/autotest_common.sh@10 -- $ 
set +x 00:05:06.620 ************************************ 00:05:06.620 END TEST make 00:05:06.620 ************************************ 00:05:06.880 12:49:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:06.880 12:49:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:06.880 12:49:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:06.880 12:49:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.880 12:49:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:06.880 12:49:06 -- pm/common@44 -- $ pid=1721361 00:05:06.880 12:49:06 -- pm/common@50 -- $ kill -TERM 1721361 00:05:06.880 12:49:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.880 12:49:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:06.880 12:49:06 -- pm/common@44 -- $ pid=1721362 00:05:06.880 12:49:06 -- pm/common@50 -- $ kill -TERM 1721362 00:05:06.880 12:49:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.880 12:49:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:06.880 12:49:06 -- pm/common@44 -- $ pid=1721364 00:05:06.880 12:49:06 -- pm/common@50 -- $ kill -TERM 1721364 00:05:06.880 12:49:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.880 12:49:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:06.880 12:49:06 -- pm/common@44 -- $ pid=1721392 00:05:06.880 12:49:06 -- pm/common@50 -- $ sudo -E kill -TERM 1721392 00:05:06.880 12:49:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:06.880 12:49:06 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:06.880 12:49:06 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.880 12:49:06 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.880 12:49:06 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.880 12:49:06 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.880 12:49:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.880 12:49:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.880 12:49:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.880 12:49:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.880 12:49:06 -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.880 12:49:06 -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.880 12:49:06 -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.880 12:49:06 -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.880 12:49:06 -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.880 12:49:06 -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.880 12:49:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.880 12:49:06 -- scripts/common.sh@344 -- # case "$op" in 00:05:06.880 12:49:06 -- scripts/common.sh@345 -- # : 1 00:05:06.880 12:49:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.880 12:49:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.880 12:49:06 -- scripts/common.sh@365 -- # decimal 1 00:05:06.880 12:49:06 -- scripts/common.sh@353 -- # local d=1 00:05:06.880 12:49:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.880 12:49:06 -- scripts/common.sh@355 -- # echo 1 00:05:06.880 12:49:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.880 12:49:06 -- scripts/common.sh@366 -- # decimal 2 00:05:06.880 12:49:06 -- scripts/common.sh@353 -- # local d=2 00:05:06.880 12:49:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.880 12:49:06 -- scripts/common.sh@355 -- # echo 2 00:05:06.880 12:49:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.880 12:49:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.880 12:49:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.880 12:49:06 -- scripts/common.sh@368 -- # return 0 00:05:06.880 12:49:06 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.880 12:49:06 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.880 --rc genhtml_branch_coverage=1 00:05:06.880 --rc genhtml_function_coverage=1 00:05:06.880 --rc genhtml_legend=1 00:05:06.880 --rc geninfo_all_blocks=1 00:05:06.880 --rc geninfo_unexecuted_blocks=1 00:05:06.880 00:05:06.880 ' 00:05:06.880 12:49:06 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.880 --rc genhtml_branch_coverage=1 00:05:06.880 --rc genhtml_function_coverage=1 00:05:06.880 --rc genhtml_legend=1 00:05:06.880 --rc geninfo_all_blocks=1 00:05:06.880 --rc geninfo_unexecuted_blocks=1 00:05:06.880 00:05:06.880 ' 00:05:06.880 12:49:06 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.880 --rc genhtml_branch_coverage=1 00:05:06.880 --rc 
genhtml_function_coverage=1 00:05:06.880 --rc genhtml_legend=1 00:05:06.880 --rc geninfo_all_blocks=1 00:05:06.880 --rc geninfo_unexecuted_blocks=1 00:05:06.880 00:05:06.880 ' 00:05:06.880 12:49:06 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.880 --rc genhtml_branch_coverage=1 00:05:06.880 --rc genhtml_function_coverage=1 00:05:06.880 --rc genhtml_legend=1 00:05:06.880 --rc geninfo_all_blocks=1 00:05:06.880 --rc geninfo_unexecuted_blocks=1 00:05:06.880 00:05:06.880 ' 00:05:06.880 12:49:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.880 12:49:06 -- nvmf/common.sh@7 -- # uname -s 00:05:06.880 12:49:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.880 12:49:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.880 12:49:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.880 12:49:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.880 12:49:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.880 12:49:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.880 12:49:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.880 12:49:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.880 12:49:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.880 12:49:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.880 12:49:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.880 12:49:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.880 12:49:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.880 12:49:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.880 12:49:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:06.880 12:49:06 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.880 12:49:06 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.880 12:49:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.880 12:49:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.880 12:49:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.880 12:49:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.880 12:49:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.881 12:49:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.881 12:49:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.881 12:49:06 -- paths/export.sh@5 -- # export PATH 00:05:06.881 12:49:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.881 12:49:06 -- nvmf/common.sh@51 -- # : 0 00:05:06.881 12:49:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.881 12:49:06 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:06.881 12:49:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.881 12:49:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.881 12:49:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.881 12:49:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.881 12:49:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.881 12:49:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.881 12:49:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.881 12:49:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:06.881 12:49:06 -- spdk/autotest.sh@32 -- # uname -s 00:05:06.881 12:49:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:06.881 12:49:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:06.881 12:49:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:06.881 12:49:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:06.881 12:49:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:06.881 12:49:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:06.881 12:49:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:06.881 12:49:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:06.881 12:49:06 -- spdk/autotest.sh@48 -- # udevadm_pid=1784303 00:05:06.881 12:49:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:06.881 12:49:06 -- pm/common@17 -- # local monitor 00:05:06.881 12:49:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.881 12:49:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:06.881 12:49:06 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:06.881 12:49:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.881 12:49:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:06.881 12:49:06 -- pm/common@25 -- # sleep 1 00:05:06.881 12:49:06 -- pm/common@21 -- # date +%s 00:05:06.881 12:49:06 -- pm/common@21 -- # date +%s 00:05:06.881 12:49:06 -- pm/common@21 -- # date +%s 00:05:06.881 12:49:06 -- pm/common@21 -- # date +%s 00:05:06.881 12:49:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732880946 00:05:06.881 12:49:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732880946 00:05:06.881 12:49:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732880946 00:05:06.881 12:49:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732880946 00:05:07.140 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732880946_collect-cpu-load.pm.log 00:05:07.140 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732880946_collect-vmstat.pm.log 00:05:07.140 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732880946_collect-cpu-temp.pm.log 00:05:07.140 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732880946_collect-bmc-pm.bmc.pm.log 00:05:08.076 
12:49:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:08.076 12:49:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:08.076 12:49:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.076 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:05:08.076 12:49:07 -- spdk/autotest.sh@59 -- # create_test_list 00:05:08.076 12:49:07 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:08.076 12:49:07 -- common/autotest_common.sh@10 -- # set +x 00:05:08.076 12:49:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:08.076 12:49:07 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:08.076 12:49:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:08.076 12:49:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:08.076 12:49:07 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:08.076 12:49:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:08.076 12:49:07 -- common/autotest_common.sh@1457 -- # uname 00:05:08.076 12:49:07 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:08.076 12:49:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:08.076 12:49:07 -- common/autotest_common.sh@1477 -- # uname 00:05:08.076 12:49:07 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:08.076 12:49:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:08.076 12:49:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:08.076 lcov: LCOV version 1.15 00:05:08.076 12:49:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:05:22.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:22.949 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:05:35.149 12:49:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:05:35.149 12:49:33 -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:35.149 12:49:33 -- common/autotest_common.sh@10 -- # set +x
00:05:35.149 12:49:33 -- spdk/autotest.sh@78 -- # rm -f
00:05:35.149 12:49:33 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:36.083 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:05:36.083 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:05:36.083 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:05:36.342 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:05:36.342 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:05:36.342 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:05:36.342 12:49:36 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:05:36.342 12:49:36 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:05:36.342 12:49:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:05:36.342 12:49:36 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:05:36.342 12:49:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:36.342 12:49:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:05:36.342 12:49:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:05:36.342 12:49:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:36.342 12:49:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:36.342 12:49:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:05:36.342 12:49:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:36.342 12:49:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:36.342 12:49:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:05:36.342 12:49:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:05:36.342 12:49:36 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
No valid GPT data, bailing
00:05:36.342 12:49:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:36.342 12:49:36 -- scripts/common.sh@394 -- # pt=
00:05:36.342 12:49:36 -- scripts/common.sh@395 -- # return 1
00:05:36.342 12:49:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:36.342 1+0 records in
00:05:36.342 1+0 records out
00:05:36.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00173622 s, 604 MB/s
00:05:36.342 12:49:36 -- spdk/autotest.sh@105 -- # sync
00:05:36.342 12:49:36 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:36.342 12:49:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:36.342 12:49:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:41.617 12:49:41 -- spdk/autotest.sh@111 -- # uname -s
00:05:41.617 12:49:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:41.617 12:49:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:41.617 12:49:41 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:44.152 Hugepages
00:05:44.152 node hugesize free / total
00:05:44.152 node0 1048576kB 0 / 0
00:05:44.152 node0 2048kB 0 / 0
00:05:44.152 node1 1048576kB 0 / 0
00:05:44.152 node1 2048kB 0 / 0
00:05:44.152
00:05:44.152 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:44.152 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:05:44.152 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:05:44.152 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:05:44.152 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:05:44.153 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:05:44.153 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:05:44.153 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:05:44.153 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:05:44.412 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:05:44.412 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:05:44.412 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:05:44.412 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:05:44.412 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:05:44.412 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:05:44.412 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:05:44.412 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:05:44.412 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:05:44.412 12:49:44 -- spdk/autotest.sh@117 -- # uname -s
00:05:44.412 12:49:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:44.412 12:49:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:44.412 12:49:44 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:46.947 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:46.947 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:47.516 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:05:47.776 12:49:47 -- common/autotest_common.sh@1517 -- # sleep 1
00:05:48.716 12:49:48 -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:48.716 12:49:48 -- common/autotest_common.sh@1518 -- # local bdfs
00:05:48.716 12:49:48 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:05:48.716 12:49:48 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:05:48.716 12:49:48 -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:48.716 12:49:48 -- common/autotest_common.sh@1498 -- # local bdfs
00:05:48.716 12:49:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:48.716 12:49:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:48.716 12:49:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:48.716 12:49:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:05:48.716 12:49:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:05:48.716 12:49:48 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:51.442 Waiting for block devices as requested
00:05:51.722 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:05:51.722 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:05:51.722 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:05:51.722 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:05:52.001 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:05:52.001 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:05:52.001 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:05:52.001 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:05:52.268 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:05:52.268 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:05:52.268 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:05:52.268 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:05:52.526 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:05:52.526 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:05:52.526 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:05:52.785 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:05:52.785 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:05:52.785 12:49:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:52.785 12:49:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:05:52.785 12:49:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:05:52.785 12:49:52 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme
00:05:52.785 12:49:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:05:52.786 12:49:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:05:52.786 12:49:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:05:52.786 12:49:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:05:52.786 12:49:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:05:52.786 12:49:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:05:52.786 12:49:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:05:52.786 12:49:52 -- common/autotest_common.sh@1531 -- # grep oacs
00:05:52.786 12:49:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:05:52.786 12:49:52 -- common/autotest_common.sh@1531 -- # oacs=' 0xe'
00:05:52.786 12:49:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:05:52.786 12:49:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:05:52.786 12:49:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:05:52.786 12:49:52 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:05:52.786 12:49:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:05:52.786 12:49:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:05:52.786 12:49:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:05:52.786 12:49:52 -- common/autotest_common.sh@1543 -- # continue
00:05:52.786 12:49:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:05:52.786 12:49:52 -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:52.786 12:49:52 -- common/autotest_common.sh@10 -- # set +x
00:05:52.786 12:49:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:05:52.786 12:49:52 -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:52.786 12:49:52 -- common/autotest_common.sh@10 -- # set +x
00:05:53.045 12:49:52 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:55.580 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:55.580 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:56.147 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:05:56.406 12:49:55 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:05:56.406 12:49:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:56.406 12:49:55 -- common/autotest_common.sh@10 -- # set +x
00:05:56.406 12:49:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:05:56.406 12:49:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:05:56.406 12:49:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:05:56.406 12:49:56 -- common/autotest_common.sh@1563 -- # bdfs=()
00:05:56.406 12:49:56 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:05:56.406 12:49:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:05:56.406 12:49:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:05:56.406 12:49:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:05:56.406 12:49:56 -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:56.406 12:49:56 -- common/autotest_common.sh@1498 -- # local bdfs
00:05:56.406 12:49:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:56.406 12:49:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:56.406 12:49:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:56.406 12:49:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:05:56.406 12:49:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:05:56.406 12:49:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:05:56.406 12:49:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:05:56.406 12:49:56 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:05:56.406 12:49:56 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:05:56.406 12:49:56 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:05:56.406 12:49:56 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:05:56.406 12:49:56 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0
00:05:56.406 12:49:56 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]]
00:05:56.406 12:49:56 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1798520
00:05:56.406 12:49:56 -- common/autotest_common.sh@1585 -- # waitforlisten 1798520
00:05:56.406 12:49:56 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:56.406 12:49:56 -- common/autotest_common.sh@835 -- # '[' -z 1798520 ']'
00:05:56.406 12:49:56 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:56.406 12:49:56 -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:56.406 12:49:56 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:56.406 12:49:56 -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:56.406 12:49:56 -- common/autotest_common.sh@10 -- # set +x
00:05:56.406 [2024-11-29 12:49:56.157076] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization...
00:05:56.406 [2024-11-29 12:49:56.157129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1798520 ]
00:05:56.406 [2024-11-29 12:49:56.220500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.665 [2024-11-29 12:49:56.264494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.665 12:49:56 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:56.665 12:49:56 -- common/autotest_common.sh@868 -- # return 0
00:05:56.665 12:49:56 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:05:56.665 12:49:56 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:05:56.665 12:49:56 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:05:59.953 nvme0n1
00:05:59.954 12:49:59 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:59.954 [2024-11-29 12:49:59.653293] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:05:59.954 request:
00:05:59.954 {
00:05:59.954 "nvme_ctrlr_name": "nvme0",
00:05:59.954 "password": "test",
00:05:59.954 "method": "bdev_nvme_opal_revert",
00:05:59.954 "req_id": 1
00:05:59.954 }
00:05:59.954 Got JSON-RPC error response
00:05:59.954 response:
00:05:59.954 {
00:05:59.954 "code": -32602,
00:05:59.954 "message": "Invalid parameters"
00:05:59.954 }
00:05:59.954 12:49:59 -- common/autotest_common.sh@1591 -- # true
00:05:59.954 12:49:59 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:05:59.954 12:49:59 -- common/autotest_common.sh@1595 -- # killprocess 1798520
00:05:59.954 12:49:59 -- common/autotest_common.sh@954 -- # '[' -z 1798520 ']'
00:05:59.954 12:49:59 -- common/autotest_common.sh@958 -- # kill -0 1798520
00:05:59.954 12:49:59 -- common/autotest_common.sh@959 -- # uname
00:05:59.954 12:49:59 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:59.954 12:49:59 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1798520
00:05:59.954 12:49:59 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:59.954 12:49:59 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:59.954 12:49:59 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1798520'
killing process with pid 1798520
00:05:59.954 12:49:59 -- common/autotest_common.sh@973 -- # kill 1798520
00:05:59.954 12:49:59 -- common/autotest_common.sh@978 -- # wait 1798520
00:06:01.858 12:50:01 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:01.858 12:50:01 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:01.858 12:50:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:01.858 12:50:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:01.858 12:50:01 -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:01.858 12:50:01 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:01.858 12:50:01 -- common/autotest_common.sh@10 -- # set +x
00:06:01.858 12:50:01 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:01.858 12:50:01 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:01.858 12:50:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:01.858 12:50:01 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:01.858 12:50:01 -- common/autotest_common.sh@10 -- # set +x
00:06:01.858 ************************************
00:06:01.858 START TEST env
************************************
00:06:01.858 12:50:01 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:01.858 * Looking for test storage...
00:06:01.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:06:01.858 12:50:01 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:01.858 12:50:01 env -- common/autotest_common.sh@1693 -- # lcov --version
00:06:01.858 12:50:01 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:01.858 12:50:01 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:01.858 12:50:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:01.858 12:50:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:01.858 12:50:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:01.858 12:50:01 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:01.858 12:50:01 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:01.858 12:50:01 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:01.858 12:50:01 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:01.858 12:50:01 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:01.858 12:50:01 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:01.858 12:50:01 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:01.858 12:50:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:01.858 12:50:01 env -- scripts/common.sh@344 -- # case "$op" in
00:06:01.858 12:50:01 env -- scripts/common.sh@345 -- # : 1
00:06:01.858 12:50:01 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:01.858 12:50:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:01.858 12:50:01 env -- scripts/common.sh@365 -- # decimal 1
00:06:01.858 12:50:01 env -- scripts/common.sh@353 -- # local d=1
00:06:01.858 12:50:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:01.858 12:50:01 env -- scripts/common.sh@355 -- # echo 1
00:06:01.858 12:50:01 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:01.858 12:50:01 env -- scripts/common.sh@366 -- # decimal 2
00:06:01.858 12:50:01 env -- scripts/common.sh@353 -- # local d=2
00:06:01.858 12:50:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:01.858 12:50:01 env -- scripts/common.sh@355 -- # echo 2
00:06:01.858 12:50:01 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:01.858 12:50:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:01.858 12:50:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:01.858 12:50:01 env -- scripts/common.sh@368 -- # return 0
00:06:01.858 12:50:01 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:01.858 12:50:01 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.858 --rc genhtml_branch_coverage=1
00:06:01.858 --rc genhtml_function_coverage=1
00:06:01.858 --rc genhtml_legend=1
00:06:01.858 --rc geninfo_all_blocks=1
00:06:01.858 --rc geninfo_unexecuted_blocks=1
00:06:01.858
00:06:01.858 '
00:06:01.858 12:50:01 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.858 --rc genhtml_branch_coverage=1
00:06:01.858 --rc genhtml_function_coverage=1
00:06:01.858 --rc genhtml_legend=1
00:06:01.858 --rc geninfo_all_blocks=1
00:06:01.858 --rc geninfo_unexecuted_blocks=1
00:06:01.858
00:06:01.858 '
00:06:01.858 12:50:01 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.858 --rc genhtml_branch_coverage=1
00:06:01.859 --rc genhtml_function_coverage=1
00:06:01.859 --rc genhtml_legend=1
00:06:01.859 --rc geninfo_all_blocks=1
00:06:01.859 --rc geninfo_unexecuted_blocks=1
00:06:01.859
00:06:01.859 '
00:06:01.859 12:50:01 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:01.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.859 --rc genhtml_branch_coverage=1
00:06:01.859 --rc genhtml_function_coverage=1
00:06:01.859 --rc genhtml_legend=1
00:06:01.859 --rc geninfo_all_blocks=1
00:06:01.859 --rc geninfo_unexecuted_blocks=1
00:06:01.859
00:06:01.859 '
00:06:01.859 12:50:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:01.859 12:50:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:01.859 12:50:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:01.859 12:50:01 env -- common/autotest_common.sh@10 -- # set +x
00:06:01.859 ************************************
00:06:01.859 START TEST env_memory
00:06:01.859 ************************************
00:06:01.859 12:50:01 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:01.859
00:06:01.859
00:06:01.859 CUnit - A unit testing framework for C - Version 2.1-3
00:06:01.859 http://cunit.sourceforge.net/
00:06:01.859
00:06:01.859
00:06:01.859 Suite: memory
00:06:01.859 Test: alloc and free memory map ...[2024-11-29 12:50:01.547699] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:01.859 passed
00:06:01.859 Test: mem map translation ...[2024-11-29 12:50:01.566778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
[2024-11-29 12:50:01.566794] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
[2024-11-29 12:50:01.566830] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
[2024-11-29 12:50:01.566836] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:01.859 passed
00:06:01.859 Test: mem map registration ...[2024-11-29 12:50:01.605137] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
[2024-11-29 12:50:01.605153] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:01.859 passed
00:06:01.859 Test: mem map adjacent registrations ...passed
00:06:01.859
00:06:01.859 Run Summary: Type Total Ran Passed Failed Inactive
00:06:01.859 suites 1 1 n/a 0 0
00:06:01.859 tests 4 4 4 0 0
00:06:01.859 asserts 152 152 152 0 n/a
00:06:01.859
00:06:01.859 Elapsed time = 0.142 seconds
00:06:01.859
00:06:01.859 real 0m0.155s
00:06:01.859 user 0m0.147s
00:06:01.859 sys 0m0.007s
00:06:01.859 12:50:01 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:01.859 12:50:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:01.859 ************************************
00:06:01.859 END TEST env_memory
00:06:01.859 ************************************
00:06:02.118 12:50:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:02.118 12:50:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:02.118 12:50:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:02.118 12:50:01 env -- common/autotest_common.sh@10 -- # set +x
00:06:02.118 ************************************
00:06:02.118 START TEST env_vtophys
00:06:02.118 ************************************
00:06:02.118 12:50:01 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:02.118 EAL: lib.eal log level changed from notice to debug
00:06:02.118 EAL: Detected lcore 0 as core 0 on socket 0
00:06:02.118 EAL: Detected lcore 1 as core 1 on socket 0
00:06:02.118 EAL: Detected lcore 2 as core 2 on socket 0
00:06:02.118 EAL: Detected lcore 3 as core 3 on socket 0
00:06:02.118 EAL: Detected lcore 4 as core 4 on socket 0
00:06:02.118 EAL: Detected lcore 5 as core 5 on socket 0
00:06:02.118 EAL: Detected lcore 6 as core 6 on socket 0
00:06:02.118 EAL: Detected lcore 7 as core 8 on socket 0
00:06:02.118 EAL: Detected lcore 8 as core 9 on socket 0
00:06:02.118 EAL: Detected lcore 9 as core 10 on socket 0
00:06:02.118 EAL: Detected lcore 10 as core 11 on socket 0
00:06:02.118 EAL: Detected lcore 11 as core 12 on socket 0
00:06:02.118 EAL: Detected lcore 12 as core 13 on socket 0
00:06:02.118 EAL: Detected lcore 13 as core 16 on socket 0
00:06:02.118 EAL: Detected lcore 14 as core 17 on socket 0
00:06:02.118 EAL: Detected lcore 15 as core 18 on socket 0
00:06:02.118 EAL: Detected lcore 16 as core 19 on socket 0
00:06:02.118 EAL: Detected lcore 17 as core 20 on socket 0
00:06:02.118 EAL: Detected lcore 18 as core 21 on socket 0
00:06:02.118 EAL: Detected lcore 19 as core 25 on socket 0
00:06:02.118 EAL: Detected lcore 20 as core 26 on socket 0
00:06:02.118 EAL: Detected lcore 21 as core 27 on socket 0
00:06:02.118 EAL: Detected lcore 22 as core 28 on socket 0
00:06:02.118 EAL: Detected lcore 23 as core 29 on socket 0
00:06:02.118 EAL: Detected lcore 24 as core 0 on socket 1
00:06:02.118 EAL: Detected lcore 25 as core 1 on socket 1
00:06:02.118 EAL: Detected lcore 26 as core 2 on socket 1
00:06:02.118 EAL: Detected lcore 27 as core 3 on socket 1
00:06:02.118 EAL: Detected lcore 28 as core 4 on socket 1
00:06:02.118 EAL: Detected lcore 29 as core 5 on socket 1
00:06:02.118 EAL: Detected lcore 30 as core 6 on socket 1
00:06:02.118 EAL: Detected lcore 31 as core 9 on socket 1
00:06:02.118 EAL: Detected lcore 32 as core 10 on socket 1
00:06:02.118 EAL: Detected lcore 33 as core 11 on socket 1
00:06:02.118 EAL: Detected lcore 34 as core 12 on socket 1
00:06:02.118 EAL: Detected lcore 35 as core 13 on socket 1
00:06:02.118 EAL: Detected lcore 36 as core 16 on socket 1
00:06:02.118 EAL: Detected lcore 37 as core 17 on socket 1
00:06:02.118 EAL: Detected lcore 38 as core 18 on socket 1
00:06:02.118 EAL: Detected lcore 39 as core 19 on socket 1
00:06:02.118 EAL: Detected lcore 40 as core 20 on socket 1
00:06:02.118 EAL: Detected lcore 41 as core 21 on socket 1
00:06:02.118 EAL: Detected lcore 42 as core 24 on socket 1
00:06:02.118 EAL: Detected lcore 43 as core 25 on socket 1
00:06:02.118 EAL: Detected lcore 44 as core 26 on socket 1
00:06:02.118 EAL: Detected lcore 45 as core 27 on socket 1
00:06:02.118 EAL: Detected lcore 46 as core 28 on socket 1
00:06:02.118 EAL: Detected lcore 47 as core 29 on socket 1
00:06:02.118 EAL: Detected lcore 48 as core 0 on socket 0
00:06:02.118 EAL: Detected lcore 49 as core 1 on socket 0
00:06:02.118 EAL: Detected lcore 50 as core 2 on socket 0
00:06:02.118 EAL: Detected lcore 51 as core 3 on socket 0
00:06:02.118 EAL: Detected lcore 52 as core 4 on socket 0
00:06:02.118 EAL: Detected lcore 53 as core 5 on socket 0
00:06:02.118 EAL: Detected lcore 54 as core 6 on socket 0
00:06:02.118 EAL: Detected lcore 55 as core 8 on socket 0
00:06:02.118 EAL: Detected lcore 56 as core 9 on socket 0
00:06:02.118 EAL: Detected lcore 57 as core 10 on socket 0
00:06:02.118 EAL: Detected lcore 58 as core 11 on socket 0
00:06:02.118 EAL: Detected lcore 59 as core 12 on socket 0
00:06:02.118 EAL: Detected lcore 60 as core 13 on socket 0
00:06:02.119 EAL: Detected lcore 61 as core 16 on socket 0
00:06:02.119 EAL: Detected lcore 62 as core 17 on socket 0
00:06:02.119 EAL: Detected lcore 63 as core 18 on socket 0
00:06:02.119 EAL: Detected lcore 64 as core 19 on socket 0
00:06:02.119 EAL: Detected lcore 65 as core 20 on socket 0
00:06:02.119 EAL: Detected lcore 66 as core 21 on socket 0
00:06:02.119 EAL: Detected lcore 67 as core 25 on socket 0
00:06:02.119 EAL: Detected lcore 68 as core 26 on socket 0
00:06:02.119 EAL: Detected lcore 69 as core 27 on socket 0
00:06:02.119 EAL: Detected lcore 70 as core 28 on socket 0
00:06:02.119 EAL: Detected lcore 71 as core 29 on socket 0
00:06:02.119 EAL: Detected lcore 72 as core 0 on socket 1
00:06:02.119 EAL: Detected lcore 73 as core 1 on socket 1
00:06:02.119 EAL: Detected lcore 74 as core 2 on socket 1
00:06:02.119 EAL: Detected lcore 75 as core 3 on socket 1
00:06:02.119 EAL: Detected lcore 76 as core 4 on socket 1
00:06:02.119 EAL: Detected lcore 77 as core 5 on socket 1
00:06:02.119 EAL: Detected lcore 78 as core 6 on socket 1
00:06:02.119 EAL: Detected lcore 79 as core 9 on socket 1
00:06:02.119 EAL: Detected lcore 80 as core 10 on socket 1
00:06:02.119 EAL: Detected lcore 81 as core 11 on socket 1
00:06:02.119 EAL: Detected lcore 82 as core 12 on socket 1
00:06:02.119 EAL: Detected lcore 83 as core 13 on socket 1
00:06:02.119 EAL: Detected lcore 84 as core 16 on socket 1
00:06:02.119 EAL: Detected lcore 85 as core 17 on socket 1
00:06:02.119 EAL: Detected lcore 86 as core 18 on socket 1
00:06:02.119 EAL: Detected lcore 87 as core 19 on socket 1
00:06:02.119 EAL: Detected lcore 88 as core 20 on socket 1
00:06:02.119 EAL: Detected lcore 89 as core 21 on socket 1
00:06:02.119 EAL: Detected lcore 90 as core 24 on socket 1
00:06:02.119 EAL: Detected lcore 91 as core 25 on socket 1
00:06:02.119 EAL: Detected lcore 92 as core 26 on socket 1
00:06:02.119 EAL: Detected lcore 93 as core 27 on socket 1
00:06:02.119 EAL: Detected lcore 94 as core 28 on socket 1
00:06:02.119 EAL: Detected lcore 95 as core 29 on socket 1
00:06:02.119 EAL: Maximum logical cores by configuration: 128
00:06:02.119 EAL: Detected CPU lcores: 96
00:06:02.119 EAL: Detected NUMA nodes: 2
00:06:02.119 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:02.119 EAL: Detected shared linkage of DPDK
00:06:02.119 EAL: No shared files mode enabled, IPC will be disabled
00:06:02.119 EAL: Bus pci wants IOVA as 'DC'
00:06:02.119 EAL: Buses did not request a specific IOVA mode.
00:06:02.119 EAL: IOMMU is available, selecting IOVA as VA mode.
00:06:02.119 EAL: Selected IOVA mode 'VA'
00:06:02.119 EAL: Probing VFIO support...
00:06:02.119 EAL: IOMMU type 1 (Type 1) is supported
00:06:02.119 EAL: IOMMU type 7 (sPAPR) is not supported
00:06:02.119 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:06:02.119 EAL: VFIO support initialized
00:06:02.119 EAL: Ask a virtual area of 0x2e000 bytes
00:06:02.119 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:02.119 EAL: Setting up physically contiguous memory...
00:06:02.119 EAL: Setting maximum number of open files to 524288
00:06:02.119 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:02.119 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:06:02.119 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:02.119 EAL: Ask a virtual area of 0x61000 bytes
00:06:02.119 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:02.119 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:02.119 EAL: Ask a virtual area of 0x400000000 bytes
00:06:02.119 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:02.119 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:02.119 EAL: Ask a virtual area of 0x61000 bytes
00:06:02.119 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:02.119 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:02.119 EAL: Ask a virtual area of 0x400000000 bytes
00:06:02.119 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:02.119 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:02.119 EAL: Ask a virtual area of 0x61000 bytes
00:06:02.119 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:02.119 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:02.119 EAL: Ask a virtual area of 0x400000000 bytes
00:06:02.119 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:02.119 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:02.119 EAL: Ask a virtual area of 0x61000 bytes
00:06:02.119 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:02.119 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:02.119 EAL: Ask a virtual area of 0x400000000 bytes
00:06:02.119 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:02.119 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:02.119 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:06:02.119 EAL: Ask a virtual area of 0x61000 bytes
00:06:02.119 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:06:02.119 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:02.119 EAL: Ask a virtual area of 0x400000000 bytes
00:06:02.119 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:06:02.119 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:06:02.119 EAL: Ask a virtual area of 0x61000 bytes
00:06:02.119 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:06:02.119 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:02.119 EAL: Ask a virtual area of 0x400000000 bytes
00:06:02.119 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:06:02.119 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:06:02.119 EAL: Ask a virtual area of 0x61000 bytes
00:06:02.119 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:06:02.119 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:02.119 EAL: Ask a virtual area of 0x400000000 bytes
00:06:02.119 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:06:02.119 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:06:02.119 EAL: Ask a virtual area of 0x61000 bytes
00:06:02.119 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:06:02.119 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:02.119 EAL: Ask a virtual area of 0x400000000 bytes
00:06:02.119 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:06:02.119 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:06:02.119 EAL: Hugepages will be freed exactly as allocated.
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: TSC frequency is ~2300000 KHz
00:06:02.119 EAL: Main lcore 0 is ready (tid=7fd4f56e8a00;cpuset=[0])
00:06:02.119 EAL: Trying to obtain current memory policy.
00:06:02.119 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.119 EAL: Restoring previous memory policy: 0
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was expanded by 2MB
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:02.119 EAL: Mem event callback 'spdk:(nil)' registered
00:06:02.119
00:06:02.119
00:06:02.119 CUnit - A unit testing framework for C - Version 2.1-3
00:06:02.119 http://cunit.sourceforge.net/
00:06:02.119
00:06:02.119
00:06:02.119 Suite: components_suite
00:06:02.119 Test: vtophys_malloc_test ...passed
00:06:02.119 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:02.119 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.119 EAL: Restoring previous memory policy: 4
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was expanded by 4MB
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was shrunk by 4MB
00:06:02.119 EAL: Trying to obtain current memory policy.
00:06:02.119 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.119 EAL: Restoring previous memory policy: 4
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was expanded by 6MB
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was shrunk by 6MB
00:06:02.119 EAL: Trying to obtain current memory policy.
00:06:02.119 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.119 EAL: Restoring previous memory policy: 4
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was expanded by 10MB
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was shrunk by 10MB
00:06:02.119 EAL: Trying to obtain current memory policy.
00:06:02.119 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.119 EAL: Restoring previous memory policy: 4
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was expanded by 18MB
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was shrunk by 18MB
00:06:02.119 EAL: Trying to obtain current memory policy.
00:06:02.119 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.119 EAL: Restoring previous memory policy: 4
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was expanded by 34MB
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was shrunk by 34MB
00:06:02.119 EAL: Trying to obtain current memory policy.
00:06:02.119 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.119 EAL: Restoring previous memory policy: 4
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.119 EAL: No shared files mode enabled, IPC is disabled
00:06:02.119 EAL: Heap on socket 0 was expanded by 66MB
00:06:02.119 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.119 EAL: request: mp_malloc_sync
00:06:02.120 EAL: No shared files mode enabled, IPC is disabled
00:06:02.120 EAL: Heap on socket 0 was shrunk by 66MB
00:06:02.120 EAL: Trying to obtain current memory policy.
00:06:02.120 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.120 EAL: Restoring previous memory policy: 4
00:06:02.120 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.120 EAL: request: mp_malloc_sync
00:06:02.120 EAL: No shared files mode enabled, IPC is disabled
00:06:02.120 EAL: Heap on socket 0 was expanded by 130MB
00:06:02.120 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.120 EAL: request: mp_malloc_sync
00:06:02.120 EAL: No shared files mode enabled, IPC is disabled
00:06:02.120 EAL: Heap on socket 0 was shrunk by 130MB
00:06:02.120 EAL: Trying to obtain current memory policy.
00:06:02.120 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.379 EAL: Restoring previous memory policy: 4
00:06:02.379 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.379 EAL: request: mp_malloc_sync
00:06:02.379 EAL: No shared files mode enabled, IPC is disabled
00:06:02.379 EAL: Heap on socket 0 was expanded by 258MB
00:06:02.379 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.379 EAL: request: mp_malloc_sync
00:06:02.379 EAL: No shared files mode enabled, IPC is disabled
00:06:02.379 EAL: Heap on socket 0 was shrunk by 258MB
00:06:02.379 EAL: Trying to obtain current memory policy.
00:06:02.379 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.379 EAL: Restoring previous memory policy: 4
00:06:02.379 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.379 EAL: request: mp_malloc_sync
00:06:02.379 EAL: No shared files mode enabled, IPC is disabled
00:06:02.379 EAL: Heap on socket 0 was expanded by 514MB
00:06:02.637 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.637 EAL: request: mp_malloc_sync
00:06:02.637 EAL: No shared files mode enabled, IPC is disabled
00:06:02.637 EAL: Heap on socket 0 was shrunk by 514MB
00:06:02.637 EAL: Trying to obtain current memory policy.
00:06:02.637 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.896 EAL: Restoring previous memory policy: 4
00:06:02.896 EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.896 EAL: request: mp_malloc_sync
00:06:02.896 EAL: No shared files mode enabled, IPC is disabled
00:06:02.896 EAL: Heap on socket 0 was expanded by 1026MB
00:06:02.896 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.155 EAL: request: mp_malloc_sync
00:06:03.155 EAL: No shared files mode enabled, IPC is disabled
00:06:03.155 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:03.155 passed
00:06:03.155
00:06:03.155 Run Summary: Type Total Ran Passed Failed Inactive
00:06:03.155 suites 1 1 n/a 0 0
00:06:03.155 tests 2 2 2 0 0
00:06:03.155 asserts 497 497 497 0 n/a
00:06:03.155
00:06:03.155 Elapsed time = 0.965 seconds
00:06:03.155 EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.155 EAL: request: mp_malloc_sync
00:06:03.155 EAL: No shared files mode enabled, IPC is disabled
00:06:03.155 EAL: Heap on socket 0 was shrunk by 2MB
00:06:03.155 EAL: No shared files mode enabled, IPC is disabled
00:06:03.155 EAL: No shared files mode enabled, IPC is disabled
00:06:03.155 EAL: No shared files mode enabled, IPC is disabled
00:06:03.155
00:06:03.155 real 0m1.081s
00:06:03.155 user 0m0.634s
00:06:03.155 sys 0m0.424s
00:06:03.155 12:50:02 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:03.155 12:50:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:03.155 ************************************
00:06:03.155 END TEST env_vtophys
00:06:03.155 ************************************
00:06:03.155 12:50:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:03.155 12:50:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:03.155 12:50:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:03.155 12:50:02 env -- common/autotest_common.sh@10 -- # set +x
00:06:03.155 ************************************
00:06:03.155 START TEST env_pci
00:06:03.155 ************************************
00:06:03.155 12:50:02 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:03.155
00:06:03.155
00:06:03.155 CUnit - A unit testing framework for C - Version 2.1-3
00:06:03.155 http://cunit.sourceforge.net/
00:06:03.155
00:06:03.155
00:06:03.155 Suite: pci
00:06:03.155 Test: pci_hook ...[2024-11-29 12:50:02.894625] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1799737 has claimed it
00:06:03.155 EAL: Cannot find device (10000:00:01.0)
00:06:03.155 EAL: Failed to attach device on primary process
00:06:03.155 passed
00:06:03.155
00:06:03.155 Run Summary: Type Total Ran Passed Failed Inactive
00:06:03.155 suites 1 1 n/a 0 0
00:06:03.155 tests 1 1 1 0 0
00:06:03.155 asserts 25 25 25 0 n/a
00:06:03.155
00:06:03.155 Elapsed time = 0.028 seconds
00:06:03.155
00:06:03.155 real 0m0.048s
00:06:03.155 user 0m0.011s
00:06:03.155 sys 0m0.036s
00:06:03.155 12:50:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:03.155 12:50:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:03.155 ************************************
00:06:03.155 END TEST env_pci
00:06:03.155 ************************************
00:06:03.155 12:50:02 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:03.155 12:50:02 env -- env/env.sh@15 -- # uname
00:06:03.155 12:50:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:03.155 12:50:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:03.155 12:50:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:03.155 12:50:02 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:03.155 12:50:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:03.155 12:50:02 env -- common/autotest_common.sh@10 -- # set +x
00:06:03.414 ************************************
00:06:03.414 START TEST env_dpdk_post_init
00:06:03.414 ************************************
00:06:03.414 12:50:03 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:03.415 EAL: Detected CPU lcores: 96
00:06:03.415 EAL: Detected NUMA nodes: 2
00:06:03.415 EAL: Detected shared linkage of DPDK
00:06:03.415 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:03.415 EAL: Selected IOVA mode 'VA'
00:06:03.415 EAL: VFIO support initialized
00:06:03.415 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:03.415 EAL: Using IOMMU type 1 (Type 1)
00:06:03.415 EAL: Ignore mapping IO port bar(1)
00:06:03.415 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:06:03.415 EAL: Ignore mapping IO port bar(1)
00:06:03.415 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:06:03.415 EAL: Ignore mapping IO port bar(1)
00:06:03.415 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:06:03.415 EAL: Ignore mapping IO port bar(1)
00:06:03.415 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:06:03.415 EAL: Ignore mapping IO port bar(1)
00:06:03.415 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:06:03.415 EAL: Ignore mapping IO port bar(1)
00:06:03.415 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:06:03.415 EAL: Ignore mapping IO port bar(1)
00:06:03.415 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:06:03.415 EAL: Ignore mapping IO port bar(1)
00:06:03.415 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:06:04.351 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:06:04.351 EAL: Ignore mapping IO port bar(1)
00:06:04.351 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:06:04.351 EAL: Ignore mapping IO port bar(1)
00:06:04.351 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:06:04.351 EAL: Ignore mapping IO port bar(1)
00:06:04.351 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:06:04.351 EAL: Ignore mapping IO port bar(1)
00:06:04.351 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:06:04.351 EAL: Ignore mapping IO port bar(1)
00:06:04.351 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:06:04.351 EAL: Ignore mapping IO port bar(1)
00:06:04.351 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:06:04.351 EAL: Ignore mapping IO port bar(1)
00:06:04.351 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:06:04.351 EAL: Ignore mapping IO port bar(1)
00:06:04.351 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:06:07.633 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:06:07.633 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:06:07.633 Starting DPDK initialization...
00:06:07.633 Starting SPDK post initialization...
00:06:07.633 SPDK NVMe probe
00:06:07.633 Attaching to 0000:5e:00.0
00:06:07.633 Attached to 0000:5e:00.0
00:06:07.633 Cleaning up...
00:06:07.633
00:06:07.633 real 0m4.344s
00:06:07.633 user 0m2.979s
00:06:07.633 sys 0m0.433s
00:06:07.633 12:50:07 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:07.633 12:50:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:07.633 ************************************
00:06:07.633 END TEST env_dpdk_post_init
00:06:07.633 ************************************
00:06:07.633 12:50:07 env -- env/env.sh@26 -- # uname
00:06:07.633 12:50:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:07.633 12:50:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:07.633 12:50:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:07.633 12:50:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:07.633 12:50:07 env -- common/autotest_common.sh@10 -- # set +x
00:06:07.633 ************************************
00:06:07.633 START TEST env_mem_callbacks
00:06:07.633 ************************************
00:06:07.633 12:50:07 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:07.633 EAL: Detected CPU lcores: 96
00:06:07.633 EAL: Detected NUMA nodes: 2
00:06:07.633 EAL: Detected shared linkage of DPDK
00:06:07.633 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:07.891 EAL: Selected IOVA mode 'VA'
00:06:07.891 EAL: VFIO support initialized
00:06:07.891 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:07.891
00:06:07.891
00:06:07.891 CUnit - A unit testing framework for C - Version 2.1-3
00:06:07.891 http://cunit.sourceforge.net/
00:06:07.891
00:06:07.891
00:06:07.891 Suite: memory
00:06:07.891 Test: test ...
00:06:07.891 register 0x200000200000 2097152
00:06:07.891 malloc 3145728
00:06:07.891 register 0x200000400000 4194304
00:06:07.891 buf 0x200000500000 len 3145728 PASSED
00:06:07.891 malloc 64
00:06:07.891 buf 0x2000004fff40 len 64 PASSED
00:06:07.891 malloc 4194304
00:06:07.891 register 0x200000800000 6291456
00:06:07.891 buf 0x200000a00000 len 4194304 PASSED
00:06:07.891 free 0x200000500000 3145728
00:06:07.891 free 0x2000004fff40 64
00:06:07.891 unregister 0x200000400000 4194304 PASSED
00:06:07.891 free 0x200000a00000 4194304
00:06:07.891 unregister 0x200000800000 6291456 PASSED
00:06:07.891 malloc 8388608
00:06:07.891 register 0x200000400000 10485760
00:06:07.891 buf 0x200000600000 len 8388608 PASSED
00:06:07.891 free 0x200000600000 8388608
00:06:07.891 unregister 0x200000400000 10485760 PASSED
00:06:07.891 passed
00:06:07.891
00:06:07.891 Run Summary: Type Total Ran Passed Failed Inactive
00:06:07.891 suites 1 1 n/a 0 0
00:06:07.891 tests 1 1 1 0 0
00:06:07.891 asserts 15 15 15 0 n/a
00:06:07.891
00:06:07.891 Elapsed time = 0.005 seconds
00:06:07.891
00:06:07.891 real 0m0.056s
00:06:07.891 user 0m0.024s
00:06:07.891 sys 0m0.032s
00:06:07.891 12:50:07 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:07.891 12:50:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:07.891 ************************************
00:06:07.891 END TEST env_mem_callbacks
00:06:07.891 ************************************
00:06:07.891
00:06:07.891 real 0m6.184s
00:06:07.891 user 0m4.016s
00:06:07.891 sys 0m1.241s
00:06:07.891 12:50:07 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:07.891 12:50:07 env -- common/autotest_common.sh@10 -- # set +x
00:06:07.891 ************************************
00:06:07.891 END TEST env
00:06:07.891 ************************************
00:06:07.891 12:50:07 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:07.891 12:50:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:07.892 12:50:07 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:07.892 12:50:07 -- common/autotest_common.sh@10 -- # set +x
00:06:07.892 ************************************
00:06:07.892 START TEST rpc
00:06:07.892 ************************************
00:06:07.892 12:50:07 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:07.892 * Looking for test storage...
00:06:07.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:07.892 12:50:07 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:07.892 12:50:07 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:07.892 12:50:07 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:08.152 12:50:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:08.152 12:50:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:08.152 12:50:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:08.152 12:50:07 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:08.152 12:50:07 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:08.152 12:50:07 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:08.152 12:50:07 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:08.152 12:50:07 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:08.152 12:50:07 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:08.152 12:50:07 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:08.152 12:50:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:08.152 12:50:07 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:08.152 12:50:07 rpc -- scripts/common.sh@345 -- # : 1
00:06:08.152 12:50:07 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:08.152 12:50:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:08.152 12:50:07 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:08.152 12:50:07 rpc -- scripts/common.sh@353 -- # local d=1
00:06:08.152 12:50:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:08.152 12:50:07 rpc -- scripts/common.sh@355 -- # echo 1
00:06:08.152 12:50:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:08.152 12:50:07 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:08.152 12:50:07 rpc -- scripts/common.sh@353 -- # local d=2
00:06:08.152 12:50:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:08.152 12:50:07 rpc -- scripts/common.sh@355 -- # echo 2
00:06:08.152 12:50:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:08.152 12:50:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:08.152 12:50:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:08.152 12:50:07 rpc -- scripts/common.sh@368 -- # return 0
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:08.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.152 --rc genhtml_branch_coverage=1
00:06:08.152 --rc genhtml_function_coverage=1
00:06:08.152 --rc genhtml_legend=1
00:06:08.152 --rc geninfo_all_blocks=1
00:06:08.152 --rc geninfo_unexecuted_blocks=1
00:06:08.152
00:06:08.152 '
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:08.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.152 --rc genhtml_branch_coverage=1
00:06:08.152 --rc genhtml_function_coverage=1
00:06:08.152 --rc genhtml_legend=1
00:06:08.152 --rc geninfo_all_blocks=1
00:06:08.152 --rc geninfo_unexecuted_blocks=1
00:06:08.152
00:06:08.152 '
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:08.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.152 --rc genhtml_branch_coverage=1
00:06:08.152 --rc genhtml_function_coverage=1
00:06:08.152 --rc genhtml_legend=1
00:06:08.152 --rc geninfo_all_blocks=1
00:06:08.152 --rc geninfo_unexecuted_blocks=1
00:06:08.152
00:06:08.152 '
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:08.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.152 --rc genhtml_branch_coverage=1
00:06:08.152 --rc genhtml_function_coverage=1
00:06:08.152 --rc genhtml_legend=1
00:06:08.152 --rc geninfo_all_blocks=1
00:06:08.152 --rc geninfo_unexecuted_blocks=1
00:06:08.152
00:06:08.152 '
00:06:08.152 12:50:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1800674
00:06:08.152 12:50:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:08.152 12:50:07 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:08.152 12:50:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1800674
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@835 -- # '[' -z 1800674 ']'
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:08.152 12:50:07 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:08.152 [2024-11-29 12:50:07.810224] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization...
00:06:08.152 [2024-11-29 12:50:07.810271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800674 ]
00:06:08.152 [2024-11-29 12:50:07.873593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.152 [2024-11-29 12:50:07.916068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:08.152 [2024-11-29 12:50:07.916104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1800674' to capture a snapshot of events at runtime.
00:06:08.152 [2024-11-29 12:50:07.916112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:08.152 [2024-11-29 12:50:07.916118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:08.152 [2024-11-29 12:50:07.916123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1800674 for offline analysis/debug.
00:06:08.152 [2024-11-29 12:50:07.916683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.411 12:50:08 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:08.411 12:50:08 rpc -- common/autotest_common.sh@868 -- # return 0
00:06:08.411 12:50:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:08.411 12:50:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:08.411 12:50:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:08.411 12:50:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:08.411 12:50:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:08.411 12:50:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.411 12:50:08 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:08.411 ************************************
00:06:08.411 START TEST rpc_integrity
00:06:08.411 ************************************
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:08.411 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.411 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:08.411 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:08.411 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:08.411 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.411 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:08.411 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:08.411 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.411 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:08.411 {
00:06:08.411 "name": "Malloc0",
00:06:08.411 "aliases": [
00:06:08.411 "f7808c15-3f6f-4932-80ac-2441a0eb2c30"
00:06:08.411 ],
00:06:08.411 "product_name": "Malloc disk",
00:06:08.411 "block_size": 512,
00:06:08.411 "num_blocks": 16384,
00:06:08.411 "uuid": "f7808c15-3f6f-4932-80ac-2441a0eb2c30",
00:06:08.411 "assigned_rate_limits": {
00:06:08.411 "rw_ios_per_sec": 0,
00:06:08.411 "rw_mbytes_per_sec": 0,
00:06:08.411 "r_mbytes_per_sec": 0,
00:06:08.411 "w_mbytes_per_sec": 0
00:06:08.411 },
00:06:08.411 "claimed": false,
00:06:08.411 "zoned": false,
00:06:08.411 "supported_io_types": {
00:06:08.411 "read": true,
00:06:08.411 "write": true,
00:06:08.411 "unmap": true,
00:06:08.411 "flush": true,
00:06:08.411 "reset": true,
00:06:08.411 "nvme_admin": false,
00:06:08.411 "nvme_io": false,
00:06:08.411 "nvme_io_md": false,
00:06:08.411 "write_zeroes": true,
00:06:08.411 "zcopy": true,
00:06:08.411 "get_zone_info": false,
00:06:08.411 "zone_management": false,
00:06:08.412 "zone_append": false,
00:06:08.412 "compare": false,
00:06:08.412 "compare_and_write": false,
00:06:08.412 "abort": true,
00:06:08.412 "seek_hole": false,
00:06:08.412 "seek_data": false,
00:06:08.412 "copy": true,
00:06:08.412 "nvme_iov_md": false
00:06:08.412 },
00:06:08.412 "memory_domains": [
00:06:08.412 {
00:06:08.412 "dma_device_id": "system",
00:06:08.412 "dma_device_type": 1
00:06:08.412 },
00:06:08.412 {
00:06:08.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:08.412 "dma_device_type": 2
00:06:08.412 }
00:06:08.412 ],
00:06:08.412 "driver_specific": {}
00:06:08.412 }
00:06:08.412 ]'
00:06:08.412 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:08.670 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:08.670 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:08.670 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.670 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:08.670 [2024-11-29 12:50:08.275358] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:08.670 [2024-11-29 12:50:08.275385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:08.670 [2024-11-29 12:50:08.275397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2183280
00:06:08.670 [2024-11-29 12:50:08.275403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:08.670 [2024-11-29 12:50:08.276480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:08.670 [2024-11-29 12:50:08.276501] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:08.670 Passthru0
00:06:08.670 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.670 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:08.670 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:08.670 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:08.670 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:08.670 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:08.670 {
00:06:08.670 "name": "Malloc0",
00:06:08.670 "aliases": [
00:06:08.670 "f7808c15-3f6f-4932-80ac-2441a0eb2c30"
00:06:08.670 ],
00:06:08.670 "product_name": "Malloc disk",
00:06:08.670 "block_size": 512,
00:06:08.670 "num_blocks": 16384,
00:06:08.670 "uuid": "f7808c15-3f6f-4932-80ac-2441a0eb2c30",
00:06:08.670 "assigned_rate_limits": {
00:06:08.670 "rw_ios_per_sec": 0,
00:06:08.670 "rw_mbytes_per_sec": 0,
00:06:08.670 "r_mbytes_per_sec": 0,
00:06:08.670 "w_mbytes_per_sec": 0
00:06:08.670 },
00:06:08.670 "claimed": true,
00:06:08.670 "claim_type": "exclusive_write",
00:06:08.670 "zoned": false,
00:06:08.670 "supported_io_types": {
00:06:08.670 "read": true,
00:06:08.670 "write": true,
00:06:08.670 "unmap": true,
00:06:08.670 "flush": true,
00:06:08.670 "reset": true,
00:06:08.670 "nvme_admin": false,
00:06:08.670 "nvme_io": false,
00:06:08.670 "nvme_io_md": false,
00:06:08.670 "write_zeroes": true,
00:06:08.670 "zcopy": true,
00:06:08.670 "get_zone_info": false,
00:06:08.670 "zone_management": false,
00:06:08.670 "zone_append": false,
00:06:08.670 "compare": false,
00:06:08.670 "compare_and_write": false,
00:06:08.671 "abort": true,
00:06:08.671 "seek_hole": false,
00:06:08.671 "seek_data": false,
00:06:08.671 "copy": true,
00:06:08.671 "nvme_iov_md": false
00:06:08.671 },
00:06:08.671 "memory_domains": [
00:06:08.671 {
00:06:08.671 "dma_device_id": "system",
00:06:08.671 "dma_device_type": 1
00:06:08.671 },
00:06:08.671 {
00:06:08.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:08.671 "dma_device_type": 2
00:06:08.671 }
00:06:08.671 ],
00:06:08.671 "driver_specific": {}
00:06:08.671 },
00:06:08.671 {
00:06:08.671 "name": "Passthru0", 00:06:08.671 "aliases": [ 00:06:08.671 "86577a0e-3cf2-5f75-a2a5-30e3f70f927a" 00:06:08.671 ], 00:06:08.671 "product_name": "passthru", 00:06:08.671 "block_size": 512, 00:06:08.671 "num_blocks": 16384, 00:06:08.671 "uuid": "86577a0e-3cf2-5f75-a2a5-30e3f70f927a", 00:06:08.671 "assigned_rate_limits": { 00:06:08.671 "rw_ios_per_sec": 0, 00:06:08.671 "rw_mbytes_per_sec": 0, 00:06:08.671 "r_mbytes_per_sec": 0, 00:06:08.671 "w_mbytes_per_sec": 0 00:06:08.671 }, 00:06:08.671 "claimed": false, 00:06:08.671 "zoned": false, 00:06:08.671 "supported_io_types": { 00:06:08.671 "read": true, 00:06:08.671 "write": true, 00:06:08.671 "unmap": true, 00:06:08.671 "flush": true, 00:06:08.671 "reset": true, 00:06:08.671 "nvme_admin": false, 00:06:08.671 "nvme_io": false, 00:06:08.671 "nvme_io_md": false, 00:06:08.671 "write_zeroes": true, 00:06:08.671 "zcopy": true, 00:06:08.671 "get_zone_info": false, 00:06:08.671 "zone_management": false, 00:06:08.671 "zone_append": false, 00:06:08.671 "compare": false, 00:06:08.671 "compare_and_write": false, 00:06:08.671 "abort": true, 00:06:08.671 "seek_hole": false, 00:06:08.671 "seek_data": false, 00:06:08.671 "copy": true, 00:06:08.671 "nvme_iov_md": false 00:06:08.671 }, 00:06:08.671 "memory_domains": [ 00:06:08.671 { 00:06:08.671 "dma_device_id": "system", 00:06:08.671 "dma_device_type": 1 00:06:08.671 }, 00:06:08.671 { 00:06:08.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.671 "dma_device_type": 2 00:06:08.671 } 00:06:08.671 ], 00:06:08.671 "driver_specific": { 00:06:08.671 "passthru": { 00:06:08.671 "name": "Passthru0", 00:06:08.671 "base_bdev_name": "Malloc0" 00:06:08.671 } 00:06:08.671 } 00:06:08.671 } 00:06:08.671 ]' 00:06:08.671 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:08.671 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.671 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.671 12:50:08 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.671 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.671 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.671 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.671 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:08.671 12:50:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:08.671 00:06:08.671 real 0m0.240s 00:06:08.671 user 0m0.136s 00:06:08.671 sys 0m0.034s 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.671 12:50:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 ************************************ 00:06:08.671 END TEST rpc_integrity 00:06:08.671 ************************************ 00:06:08.671 12:50:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:08.671 12:50:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.671 12:50:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.671 12:50:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 ************************************ 00:06:08.671 START TEST rpc_plugins 
00:06:08.671 ************************************ 00:06:08.671 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:08.671 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:08.671 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.671 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.671 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:08.671 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:08.671 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.671 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.930 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:08.930 { 00:06:08.930 "name": "Malloc1", 00:06:08.930 "aliases": [ 00:06:08.930 "d90e7332-a431-4528-a270-d0d6a120f8bb" 00:06:08.930 ], 00:06:08.930 "product_name": "Malloc disk", 00:06:08.930 "block_size": 4096, 00:06:08.930 "num_blocks": 256, 00:06:08.930 "uuid": "d90e7332-a431-4528-a270-d0d6a120f8bb", 00:06:08.930 "assigned_rate_limits": { 00:06:08.930 "rw_ios_per_sec": 0, 00:06:08.930 "rw_mbytes_per_sec": 0, 00:06:08.930 "r_mbytes_per_sec": 0, 00:06:08.930 "w_mbytes_per_sec": 0 00:06:08.930 }, 00:06:08.930 "claimed": false, 00:06:08.930 "zoned": false, 00:06:08.930 "supported_io_types": { 00:06:08.930 "read": true, 00:06:08.930 "write": true, 00:06:08.930 "unmap": true, 00:06:08.930 "flush": true, 00:06:08.930 "reset": true, 00:06:08.930 "nvme_admin": false, 00:06:08.930 "nvme_io": false, 00:06:08.930 "nvme_io_md": false, 00:06:08.930 "write_zeroes": true, 00:06:08.930 "zcopy": true, 00:06:08.930 "get_zone_info": false, 00:06:08.930 "zone_management": false, 00:06:08.930 
"zone_append": false, 00:06:08.930 "compare": false, 00:06:08.930 "compare_and_write": false, 00:06:08.930 "abort": true, 00:06:08.930 "seek_hole": false, 00:06:08.930 "seek_data": false, 00:06:08.930 "copy": true, 00:06:08.930 "nvme_iov_md": false 00:06:08.930 }, 00:06:08.930 "memory_domains": [ 00:06:08.930 { 00:06:08.930 "dma_device_id": "system", 00:06:08.930 "dma_device_type": 1 00:06:08.930 }, 00:06:08.930 { 00:06:08.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.930 "dma_device_type": 2 00:06:08.930 } 00:06:08.930 ], 00:06:08.930 "driver_specific": {} 00:06:08.930 } 00:06:08.930 ]' 00:06:08.930 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:08.930 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:08.930 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:08.930 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.930 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.930 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.930 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:08.930 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.930 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.930 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.930 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:08.930 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:08.930 12:50:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:08.930 00:06:08.930 real 0m0.126s 00:06:08.930 user 0m0.069s 00:06:08.930 sys 0m0.022s 00:06:08.930 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.930 12:50:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.930 ************************************ 
00:06:08.930 END TEST rpc_plugins 00:06:08.930 ************************************ 00:06:08.930 12:50:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:08.930 12:50:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.930 12:50:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.930 12:50:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.930 ************************************ 00:06:08.930 START TEST rpc_trace_cmd_test 00:06:08.930 ************************************ 00:06:08.930 12:50:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:08.930 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:08.930 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:08.930 12:50:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.930 12:50:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.930 12:50:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.930 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:08.930 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1800674", 00:06:08.930 "tpoint_group_mask": "0x8", 00:06:08.930 "iscsi_conn": { 00:06:08.930 "mask": "0x2", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "scsi": { 00:06:08.930 "mask": "0x4", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "bdev": { 00:06:08.930 "mask": "0x8", 00:06:08.930 "tpoint_mask": "0xffffffffffffffff" 00:06:08.930 }, 00:06:08.930 "nvmf_rdma": { 00:06:08.930 "mask": "0x10", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "nvmf_tcp": { 00:06:08.930 "mask": "0x20", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "ftl": { 00:06:08.930 "mask": "0x40", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "blobfs": { 00:06:08.930 "mask": "0x80", 00:06:08.930 
"tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "dsa": { 00:06:08.930 "mask": "0x200", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "thread": { 00:06:08.930 "mask": "0x400", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "nvme_pcie": { 00:06:08.930 "mask": "0x800", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "iaa": { 00:06:08.930 "mask": "0x1000", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "nvme_tcp": { 00:06:08.930 "mask": "0x2000", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "bdev_nvme": { 00:06:08.930 "mask": "0x4000", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "sock": { 00:06:08.930 "mask": "0x8000", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "blob": { 00:06:08.930 "mask": "0x10000", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "bdev_raid": { 00:06:08.930 "mask": "0x20000", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.930 }, 00:06:08.930 "scheduler": { 00:06:08.930 "mask": "0x40000", 00:06:08.930 "tpoint_mask": "0x0" 00:06:08.931 } 00:06:08.931 }' 00:06:08.931 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:08.931 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:08.931 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:08.931 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:08.931 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:09.190 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:09.190 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:09.190 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:09.190 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:09.190 12:50:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:09.190 00:06:09.190 real 0m0.213s 00:06:09.190 user 0m0.179s 00:06:09.190 sys 0m0.026s 00:06:09.190 12:50:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.190 12:50:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.190 ************************************ 00:06:09.190 END TEST rpc_trace_cmd_test 00:06:09.190 ************************************ 00:06:09.190 12:50:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:09.190 12:50:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:09.190 12:50:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:09.190 12:50:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.190 12:50:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.190 12:50:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.190 ************************************ 00:06:09.190 START TEST rpc_daemon_integrity 00:06:09.190 ************************************ 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:09.190 { 00:06:09.190 "name": "Malloc2", 00:06:09.190 "aliases": [ 00:06:09.190 "e9ccd9de-7b6c-4d57-a304-ce7817bcda76" 00:06:09.190 ], 00:06:09.190 "product_name": "Malloc disk", 00:06:09.190 "block_size": 512, 00:06:09.190 "num_blocks": 16384, 00:06:09.190 "uuid": "e9ccd9de-7b6c-4d57-a304-ce7817bcda76", 00:06:09.190 "assigned_rate_limits": { 00:06:09.190 "rw_ios_per_sec": 0, 00:06:09.190 "rw_mbytes_per_sec": 0, 00:06:09.190 "r_mbytes_per_sec": 0, 00:06:09.190 "w_mbytes_per_sec": 0 00:06:09.190 }, 00:06:09.190 "claimed": false, 00:06:09.190 "zoned": false, 00:06:09.190 "supported_io_types": { 00:06:09.190 "read": true, 00:06:09.190 "write": true, 00:06:09.190 "unmap": true, 00:06:09.190 "flush": true, 00:06:09.190 "reset": true, 00:06:09.190 "nvme_admin": false, 00:06:09.190 "nvme_io": false, 00:06:09.190 "nvme_io_md": false, 00:06:09.190 "write_zeroes": true, 00:06:09.190 "zcopy": true, 00:06:09.190 "get_zone_info": false, 00:06:09.190 "zone_management": false, 00:06:09.190 "zone_append": false, 00:06:09.190 "compare": false, 00:06:09.190 "compare_and_write": false, 00:06:09.190 "abort": true, 00:06:09.190 "seek_hole": false, 00:06:09.190 "seek_data": false, 00:06:09.190 "copy": true, 00:06:09.190 "nvme_iov_md": false 00:06:09.190 }, 00:06:09.190 "memory_domains": [ 00:06:09.190 { 
00:06:09.190 "dma_device_id": "system", 00:06:09.190 "dma_device_type": 1 00:06:09.190 }, 00:06:09.190 { 00:06:09.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.190 "dma_device_type": 2 00:06:09.190 } 00:06:09.190 ], 00:06:09.190 "driver_specific": {} 00:06:09.190 } 00:06:09.190 ]' 00:06:09.190 12:50:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.450 [2024-11-29 12:50:09.037434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:09.450 [2024-11-29 12:50:09.037460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:09.450 [2024-11-29 12:50:09.037472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2185150 00:06:09.450 [2024-11-29 12:50:09.037479] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.450 [2024-11-29 12:50:09.038493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.450 [2024-11-29 12:50:09.038513] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:09.450 Passthru0 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:09.450 { 00:06:09.450 "name": "Malloc2", 00:06:09.450 "aliases": [ 00:06:09.450 "e9ccd9de-7b6c-4d57-a304-ce7817bcda76" 00:06:09.450 ], 00:06:09.450 "product_name": "Malloc disk", 00:06:09.450 "block_size": 512, 00:06:09.450 "num_blocks": 16384, 00:06:09.450 "uuid": "e9ccd9de-7b6c-4d57-a304-ce7817bcda76", 00:06:09.450 "assigned_rate_limits": { 00:06:09.450 "rw_ios_per_sec": 0, 00:06:09.450 "rw_mbytes_per_sec": 0, 00:06:09.450 "r_mbytes_per_sec": 0, 00:06:09.450 "w_mbytes_per_sec": 0 00:06:09.450 }, 00:06:09.450 "claimed": true, 00:06:09.450 "claim_type": "exclusive_write", 00:06:09.450 "zoned": false, 00:06:09.450 "supported_io_types": { 00:06:09.450 "read": true, 00:06:09.450 "write": true, 00:06:09.450 "unmap": true, 00:06:09.450 "flush": true, 00:06:09.450 "reset": true, 00:06:09.450 "nvme_admin": false, 00:06:09.450 "nvme_io": false, 00:06:09.450 "nvme_io_md": false, 00:06:09.450 "write_zeroes": true, 00:06:09.450 "zcopy": true, 00:06:09.450 "get_zone_info": false, 00:06:09.450 "zone_management": false, 00:06:09.450 "zone_append": false, 00:06:09.450 "compare": false, 00:06:09.450 "compare_and_write": false, 00:06:09.450 "abort": true, 00:06:09.450 "seek_hole": false, 00:06:09.450 "seek_data": false, 00:06:09.450 "copy": true, 00:06:09.450 "nvme_iov_md": false 00:06:09.450 }, 00:06:09.450 "memory_domains": [ 00:06:09.450 { 00:06:09.450 "dma_device_id": "system", 00:06:09.450 "dma_device_type": 1 00:06:09.450 }, 00:06:09.450 { 00:06:09.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.450 "dma_device_type": 2 00:06:09.450 } 00:06:09.450 ], 00:06:09.450 "driver_specific": {} 00:06:09.450 }, 00:06:09.450 { 00:06:09.450 "name": "Passthru0", 00:06:09.450 "aliases": [ 00:06:09.450 "97f2b068-e4dc-5ce8-afa9-0d937f12994c" 00:06:09.450 ], 00:06:09.450 "product_name": "passthru", 00:06:09.450 "block_size": 512, 00:06:09.450 "num_blocks": 16384, 00:06:09.450 "uuid": 
"97f2b068-e4dc-5ce8-afa9-0d937f12994c", 00:06:09.450 "assigned_rate_limits": { 00:06:09.450 "rw_ios_per_sec": 0, 00:06:09.450 "rw_mbytes_per_sec": 0, 00:06:09.450 "r_mbytes_per_sec": 0, 00:06:09.450 "w_mbytes_per_sec": 0 00:06:09.450 }, 00:06:09.450 "claimed": false, 00:06:09.450 "zoned": false, 00:06:09.450 "supported_io_types": { 00:06:09.450 "read": true, 00:06:09.450 "write": true, 00:06:09.450 "unmap": true, 00:06:09.450 "flush": true, 00:06:09.450 "reset": true, 00:06:09.450 "nvme_admin": false, 00:06:09.450 "nvme_io": false, 00:06:09.450 "nvme_io_md": false, 00:06:09.450 "write_zeroes": true, 00:06:09.450 "zcopy": true, 00:06:09.450 "get_zone_info": false, 00:06:09.450 "zone_management": false, 00:06:09.450 "zone_append": false, 00:06:09.450 "compare": false, 00:06:09.450 "compare_and_write": false, 00:06:09.450 "abort": true, 00:06:09.450 "seek_hole": false, 00:06:09.450 "seek_data": false, 00:06:09.450 "copy": true, 00:06:09.450 "nvme_iov_md": false 00:06:09.450 }, 00:06:09.450 "memory_domains": [ 00:06:09.450 { 00:06:09.450 "dma_device_id": "system", 00:06:09.450 "dma_device_type": 1 00:06:09.450 }, 00:06:09.450 { 00:06:09.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.450 "dma_device_type": 2 00:06:09.450 } 00:06:09.450 ], 00:06:09.450 "driver_specific": { 00:06:09.450 "passthru": { 00:06:09.450 "name": "Passthru0", 00:06:09.450 "base_bdev_name": "Malloc2" 00:06:09.450 } 00:06:09.450 } 00:06:09.450 } 00:06:09.450 ]' 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:09.450 00:06:09.450 real 0m0.227s 00:06:09.450 user 0m0.147s 00:06:09.450 sys 0m0.026s 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.450 12:50:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.450 ************************************ 00:06:09.451 END TEST rpc_daemon_integrity 00:06:09.451 ************************************ 00:06:09.451 12:50:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:09.451 12:50:09 rpc -- rpc/rpc.sh@84 -- # killprocess 1800674 00:06:09.451 12:50:09 rpc -- common/autotest_common.sh@954 -- # '[' -z 1800674 ']' 00:06:09.451 12:50:09 rpc -- common/autotest_common.sh@958 -- # kill -0 1800674 00:06:09.451 12:50:09 rpc -- common/autotest_common.sh@959 -- # uname 00:06:09.451 12:50:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.451 12:50:09 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1800674 00:06:09.451 12:50:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.451 12:50:09 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.451 12:50:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1800674' 00:06:09.451 killing process with pid 1800674 00:06:09.451 12:50:09 rpc -- common/autotest_common.sh@973 -- # kill 1800674 00:06:09.451 12:50:09 rpc -- common/autotest_common.sh@978 -- # wait 1800674 00:06:10.020 00:06:10.020 real 0m1.951s 00:06:10.020 user 0m2.459s 00:06:10.020 sys 0m0.653s 00:06:10.020 12:50:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.020 12:50:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.020 ************************************ 00:06:10.020 END TEST rpc 00:06:10.020 ************************************ 00:06:10.020 12:50:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.020 12:50:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.020 12:50:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.020 12:50:09 -- common/autotest_common.sh@10 -- # set +x 00:06:10.020 ************************************ 00:06:10.020 START TEST skip_rpc 00:06:10.020 ************************************ 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.020 * Looking for test storage... 
00:06:10.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.020 12:50:09 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.020 --rc genhtml_branch_coverage=1 00:06:10.020 --rc genhtml_function_coverage=1 00:06:10.020 --rc genhtml_legend=1 00:06:10.020 --rc geninfo_all_blocks=1 00:06:10.020 --rc geninfo_unexecuted_blocks=1 00:06:10.020 00:06:10.020 ' 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.020 --rc genhtml_branch_coverage=1 00:06:10.020 --rc genhtml_function_coverage=1 00:06:10.020 --rc genhtml_legend=1 00:06:10.020 --rc geninfo_all_blocks=1 00:06:10.020 --rc geninfo_unexecuted_blocks=1 00:06:10.020 00:06:10.020 ' 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:10.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.020 --rc genhtml_branch_coverage=1 00:06:10.020 --rc genhtml_function_coverage=1 00:06:10.020 --rc genhtml_legend=1 00:06:10.020 --rc geninfo_all_blocks=1 00:06:10.020 --rc geninfo_unexecuted_blocks=1 00:06:10.020 00:06:10.020 ' 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.020 --rc genhtml_branch_coverage=1 00:06:10.020 --rc genhtml_function_coverage=1 00:06:10.020 --rc genhtml_legend=1 00:06:10.020 --rc geninfo_all_blocks=1 00:06:10.020 --rc geninfo_unexecuted_blocks=1 00:06:10.020 00:06:10.020 ' 00:06:10.020 12:50:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.020 12:50:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.020 12:50:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.020 12:50:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.020 ************************************ 00:06:10.020 START TEST skip_rpc 00:06:10.020 ************************************ 00:06:10.020 12:50:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:10.020 12:50:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1801268 00:06:10.020 12:50:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.020 12:50:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:10.020 12:50:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:10.280 [2024-11-29 12:50:09.868588] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:06:10.280 [2024-11-29 12:50:09.868627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801268 ] 00:06:10.280 [2024-11-29 12:50:09.928992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.280 [2024-11-29 12:50:09.972436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.547 12:50:14 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1801268 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1801268 ']' 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1801268 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801268 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801268' 00:06:15.547 killing process with pid 1801268 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1801268 00:06:15.547 12:50:14 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1801268 00:06:15.547 00:06:15.547 real 0m5.366s 00:06:15.547 user 0m5.151s 00:06:15.547 sys 0m0.256s 00:06:15.547 12:50:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.547 12:50:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.547 ************************************ 00:06:15.547 END TEST skip_rpc 00:06:15.547 ************************************ 00:06:15.547 12:50:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:15.547 12:50:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.547 12:50:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.547 12:50:15 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.547 ************************************ 00:06:15.547 START TEST skip_rpc_with_json 00:06:15.547 ************************************ 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1802219 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1802219 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1802219 ']' 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.547 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.547 [2024-11-29 12:50:15.308629] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
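[Editor's note] `waitforlisten` above blocks until `spdk_tgt` is up and listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A minimal stand-in under stated assumptions — the name `wait_for_path` and the file-existence check are illustrative, not the actual socket probe the harness uses:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for waitforlisten: poll for a path with a
# retry budget instead of probing the JSON-RPC socket itself.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$path" ] && return 0    # target appeared; stop polling
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

The real helper additionally echoes the "Waiting for process to start up and listen on UNIX domain socket..." line seen twice above (once from the xtrace of `echo`, once from its output).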
00:06:15.547 [2024-11-29 12:50:15.308671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1802219 ] 00:06:15.804 [2024-11-29 12:50:15.369550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.804 [2024-11-29 12:50:15.409893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.804 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.804 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:15.804 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:15.804 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.804 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.804 [2024-11-29 12:50:15.622146] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:16.062 request: 00:06:16.062 { 00:06:16.062 "trtype": "tcp", 00:06:16.062 "method": "nvmf_get_transports", 00:06:16.062 "req_id": 1 00:06:16.062 } 00:06:16.062 Got JSON-RPC error response 00:06:16.062 response: 00:06:16.062 { 00:06:16.062 "code": -19, 00:06:16.062 "message": "No such device" 00:06:16.062 } 00:06:16.062 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:16.062 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:16.062 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.062 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.062 [2024-11-29 12:50:15.634252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.062 12:50:15 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.062 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:16.062 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.062 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.062 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.062 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.062 { 00:06:16.062 "subsystems": [ 00:06:16.062 { 00:06:16.062 "subsystem": "fsdev", 00:06:16.062 "config": [ 00:06:16.062 { 00:06:16.062 "method": "fsdev_set_opts", 00:06:16.062 "params": { 00:06:16.062 "fsdev_io_pool_size": 65535, 00:06:16.062 "fsdev_io_cache_size": 256 00:06:16.062 } 00:06:16.062 } 00:06:16.062 ] 00:06:16.062 }, 00:06:16.062 { 00:06:16.062 "subsystem": "vfio_user_target", 00:06:16.062 "config": null 00:06:16.062 }, 00:06:16.062 { 00:06:16.062 "subsystem": "keyring", 00:06:16.062 "config": [] 00:06:16.062 }, 00:06:16.062 { 00:06:16.062 "subsystem": "iobuf", 00:06:16.062 "config": [ 00:06:16.062 { 00:06:16.062 "method": "iobuf_set_options", 00:06:16.062 "params": { 00:06:16.062 "small_pool_count": 8192, 00:06:16.062 "large_pool_count": 1024, 00:06:16.062 "small_bufsize": 8192, 00:06:16.062 "large_bufsize": 135168, 00:06:16.062 "enable_numa": false 00:06:16.062 } 00:06:16.062 } 00:06:16.062 ] 00:06:16.062 }, 00:06:16.062 { 00:06:16.062 "subsystem": "sock", 00:06:16.062 "config": [ 00:06:16.062 { 00:06:16.062 "method": "sock_set_default_impl", 00:06:16.062 "params": { 00:06:16.062 "impl_name": "posix" 00:06:16.062 } 00:06:16.062 }, 00:06:16.062 { 00:06:16.062 "method": "sock_impl_set_options", 00:06:16.062 "params": { 00:06:16.062 "impl_name": "ssl", 00:06:16.062 "recv_buf_size": 4096, 00:06:16.062 "send_buf_size": 4096, 
00:06:16.062 "enable_recv_pipe": true, 00:06:16.062 "enable_quickack": false, 00:06:16.062 "enable_placement_id": 0, 00:06:16.062 "enable_zerocopy_send_server": true, 00:06:16.062 "enable_zerocopy_send_client": false, 00:06:16.062 "zerocopy_threshold": 0, 00:06:16.062 "tls_version": 0, 00:06:16.062 "enable_ktls": false 00:06:16.062 } 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "method": "sock_impl_set_options", 00:06:16.063 "params": { 00:06:16.063 "impl_name": "posix", 00:06:16.063 "recv_buf_size": 2097152, 00:06:16.063 "send_buf_size": 2097152, 00:06:16.063 "enable_recv_pipe": true, 00:06:16.063 "enable_quickack": false, 00:06:16.063 "enable_placement_id": 0, 00:06:16.063 "enable_zerocopy_send_server": true, 00:06:16.063 "enable_zerocopy_send_client": false, 00:06:16.063 "zerocopy_threshold": 0, 00:06:16.063 "tls_version": 0, 00:06:16.063 "enable_ktls": false 00:06:16.063 } 00:06:16.063 } 00:06:16.063 ] 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "vmd", 00:06:16.063 "config": [] 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "accel", 00:06:16.063 "config": [ 00:06:16.063 { 00:06:16.063 "method": "accel_set_options", 00:06:16.063 "params": { 00:06:16.063 "small_cache_size": 128, 00:06:16.063 "large_cache_size": 16, 00:06:16.063 "task_count": 2048, 00:06:16.063 "sequence_count": 2048, 00:06:16.063 "buf_count": 2048 00:06:16.063 } 00:06:16.063 } 00:06:16.063 ] 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "bdev", 00:06:16.063 "config": [ 00:06:16.063 { 00:06:16.063 "method": "bdev_set_options", 00:06:16.063 "params": { 00:06:16.063 "bdev_io_pool_size": 65535, 00:06:16.063 "bdev_io_cache_size": 256, 00:06:16.063 "bdev_auto_examine": true, 00:06:16.063 "iobuf_small_cache_size": 128, 00:06:16.063 "iobuf_large_cache_size": 16 00:06:16.063 } 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "method": "bdev_raid_set_options", 00:06:16.063 "params": { 00:06:16.063 "process_window_size_kb": 1024, 00:06:16.063 "process_max_bandwidth_mb_sec": 0 
00:06:16.063 } 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "method": "bdev_iscsi_set_options", 00:06:16.063 "params": { 00:06:16.063 "timeout_sec": 30 00:06:16.063 } 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "method": "bdev_nvme_set_options", 00:06:16.063 "params": { 00:06:16.063 "action_on_timeout": "none", 00:06:16.063 "timeout_us": 0, 00:06:16.063 "timeout_admin_us": 0, 00:06:16.063 "keep_alive_timeout_ms": 10000, 00:06:16.063 "arbitration_burst": 0, 00:06:16.063 "low_priority_weight": 0, 00:06:16.063 "medium_priority_weight": 0, 00:06:16.063 "high_priority_weight": 0, 00:06:16.063 "nvme_adminq_poll_period_us": 10000, 00:06:16.063 "nvme_ioq_poll_period_us": 0, 00:06:16.063 "io_queue_requests": 0, 00:06:16.063 "delay_cmd_submit": true, 00:06:16.063 "transport_retry_count": 4, 00:06:16.063 "bdev_retry_count": 3, 00:06:16.063 "transport_ack_timeout": 0, 00:06:16.063 "ctrlr_loss_timeout_sec": 0, 00:06:16.063 "reconnect_delay_sec": 0, 00:06:16.063 "fast_io_fail_timeout_sec": 0, 00:06:16.063 "disable_auto_failback": false, 00:06:16.063 "generate_uuids": false, 00:06:16.063 "transport_tos": 0, 00:06:16.063 "nvme_error_stat": false, 00:06:16.063 "rdma_srq_size": 0, 00:06:16.063 "io_path_stat": false, 00:06:16.063 "allow_accel_sequence": false, 00:06:16.063 "rdma_max_cq_size": 0, 00:06:16.063 "rdma_cm_event_timeout_ms": 0, 00:06:16.063 "dhchap_digests": [ 00:06:16.063 "sha256", 00:06:16.063 "sha384", 00:06:16.063 "sha512" 00:06:16.063 ], 00:06:16.063 "dhchap_dhgroups": [ 00:06:16.063 "null", 00:06:16.063 "ffdhe2048", 00:06:16.063 "ffdhe3072", 00:06:16.063 "ffdhe4096", 00:06:16.063 "ffdhe6144", 00:06:16.063 "ffdhe8192" 00:06:16.063 ] 00:06:16.063 } 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "method": "bdev_nvme_set_hotplug", 00:06:16.063 "params": { 00:06:16.063 "period_us": 100000, 00:06:16.063 "enable": false 00:06:16.063 } 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "method": "bdev_wait_for_examine" 00:06:16.063 } 00:06:16.063 ] 00:06:16.063 }, 00:06:16.063 { 
00:06:16.063 "subsystem": "scsi", 00:06:16.063 "config": null 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "scheduler", 00:06:16.063 "config": [ 00:06:16.063 { 00:06:16.063 "method": "framework_set_scheduler", 00:06:16.063 "params": { 00:06:16.063 "name": "static" 00:06:16.063 } 00:06:16.063 } 00:06:16.063 ] 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "vhost_scsi", 00:06:16.063 "config": [] 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "vhost_blk", 00:06:16.063 "config": [] 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "ublk", 00:06:16.063 "config": [] 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "nbd", 00:06:16.063 "config": [] 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "nvmf", 00:06:16.063 "config": [ 00:06:16.063 { 00:06:16.063 "method": "nvmf_set_config", 00:06:16.063 "params": { 00:06:16.063 "discovery_filter": "match_any", 00:06:16.063 "admin_cmd_passthru": { 00:06:16.063 "identify_ctrlr": false 00:06:16.063 }, 00:06:16.063 "dhchap_digests": [ 00:06:16.063 "sha256", 00:06:16.063 "sha384", 00:06:16.063 "sha512" 00:06:16.063 ], 00:06:16.063 "dhchap_dhgroups": [ 00:06:16.063 "null", 00:06:16.063 "ffdhe2048", 00:06:16.063 "ffdhe3072", 00:06:16.063 "ffdhe4096", 00:06:16.063 "ffdhe6144", 00:06:16.063 "ffdhe8192" 00:06:16.063 ] 00:06:16.063 } 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "method": "nvmf_set_max_subsystems", 00:06:16.063 "params": { 00:06:16.063 "max_subsystems": 1024 00:06:16.063 } 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "method": "nvmf_set_crdt", 00:06:16.063 "params": { 00:06:16.063 "crdt1": 0, 00:06:16.063 "crdt2": 0, 00:06:16.063 "crdt3": 0 00:06:16.063 } 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "method": "nvmf_create_transport", 00:06:16.063 "params": { 00:06:16.063 "trtype": "TCP", 00:06:16.063 "max_queue_depth": 128, 00:06:16.063 "max_io_qpairs_per_ctrlr": 127, 00:06:16.063 "in_capsule_data_size": 4096, 00:06:16.063 "max_io_size": 131072, 00:06:16.063 
"io_unit_size": 131072, 00:06:16.063 "max_aq_depth": 128, 00:06:16.063 "num_shared_buffers": 511, 00:06:16.063 "buf_cache_size": 4294967295, 00:06:16.063 "dif_insert_or_strip": false, 00:06:16.063 "zcopy": false, 00:06:16.063 "c2h_success": true, 00:06:16.063 "sock_priority": 0, 00:06:16.063 "abort_timeout_sec": 1, 00:06:16.063 "ack_timeout": 0, 00:06:16.063 "data_wr_pool_size": 0 00:06:16.063 } 00:06:16.063 } 00:06:16.063 ] 00:06:16.063 }, 00:06:16.063 { 00:06:16.063 "subsystem": "iscsi", 00:06:16.063 "config": [ 00:06:16.063 { 00:06:16.063 "method": "iscsi_set_options", 00:06:16.063 "params": { 00:06:16.063 "node_base": "iqn.2016-06.io.spdk", 00:06:16.063 "max_sessions": 128, 00:06:16.063 "max_connections_per_session": 2, 00:06:16.063 "max_queue_depth": 64, 00:06:16.063 "default_time2wait": 2, 00:06:16.063 "default_time2retain": 20, 00:06:16.063 "first_burst_length": 8192, 00:06:16.063 "immediate_data": true, 00:06:16.063 "allow_duplicated_isid": false, 00:06:16.063 "error_recovery_level": 0, 00:06:16.063 "nop_timeout": 60, 00:06:16.063 "nop_in_interval": 30, 00:06:16.063 "disable_chap": false, 00:06:16.063 "require_chap": false, 00:06:16.063 "mutual_chap": false, 00:06:16.063 "chap_group": 0, 00:06:16.063 "max_large_datain_per_connection": 64, 00:06:16.063 "max_r2t_per_connection": 4, 00:06:16.063 "pdu_pool_size": 36864, 00:06:16.063 "immediate_data_pool_size": 16384, 00:06:16.063 "data_out_pool_size": 2048 00:06:16.063 } 00:06:16.063 } 00:06:16.063 ] 00:06:16.063 } 00:06:16.063 ] 00:06:16.063 } 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1802219 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1802219 ']' 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1802219 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1802219 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1802219' 00:06:16.063 killing process with pid 1802219 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1802219 00:06:16.063 12:50:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1802219 00:06:16.632 12:50:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1802281 00:06:16.632 12:50:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.632 12:50:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1802281 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1802281 ']' 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1802281 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1802281 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1802281' 00:06:21.902 killing process with pid 1802281 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1802281 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1802281 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:21.902 00:06:21.902 real 0m6.271s 00:06:21.902 user 0m5.987s 00:06:21.902 sys 0m0.582s 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.902 12:50:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:21.902 ************************************ 00:06:21.902 END TEST skip_rpc_with_json 00:06:21.902 ************************************ 00:06:21.902 12:50:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:21.902 12:50:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.903 12:50:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.903 12:50:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.903 ************************************ 00:06:21.903 START TEST skip_rpc_with_delay 00:06:21.903 ************************************ 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.903 [2024-11-29 12:50:21.646758] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
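[Editor's note] The `NOT` wrapper traced above asserts that a command *fails*: starting `spdk_tgt` with both `--no-rpc-server` and `--wait-for-rpc` must error out, and the test passes only because it does (`es=1`, then `(( !es == 0 ))`). A trimmed sketch of that pattern — the `es > 128` branch collapsing signal deaths is mirrored from the trace, the rest of `autotest_common.sh`'s bookkeeping is omitted:

```shell
#!/usr/bin/env bash
# Expected-failure wrapper: succeed only when the wrapped command
# exits non-zero, as in autotest_common.sh's NOT/valid_exec_arg trace.
NOT() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then
        # killed by a signal; collapse to a generic failure code
        es=1
    fi
    (( es != 0 ))
}
```

Usage mirrors the log: `NOT rpc_cmd spdk_get_version` passes in the skip_rpc test precisely because the RPC server was never started.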
00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.903 00:06:21.903 real 0m0.069s 00:06:21.903 user 0m0.046s 00:06:21.903 sys 0m0.023s 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.903 12:50:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:21.903 ************************************ 00:06:21.903 END TEST skip_rpc_with_delay 00:06:21.903 ************************************ 00:06:21.903 12:50:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:21.903 12:50:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:21.903 12:50:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:21.903 12:50:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.903 12:50:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.903 12:50:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.162 ************************************ 00:06:22.162 START TEST exit_on_failed_rpc_init 00:06:22.162 ************************************ 00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1803256 00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1803256 00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
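[Editor's note] Every test above tears down with `killprocess $spdk_pid`, whose trace shows it validating the pid (`'[' -z ... ']'`, `kill -0`), checking the process name against `sudo` via `ps --no-headers -o comm=`, then killing and waiting. A trimmed sketch without the sudo/comm guard:

```shell
#!/usr/bin/env bash
# Trimmed killprocess: verify the pid is set and alive, kill it,
# then reap it so the harness does not leak zombies.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # still running?
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore kill-induced status
}
```

Note `wait` only reaps children of the calling shell, which is why the harness forks `spdk_tgt` from the same script that later calls `killprocess`.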
00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1803256 ']' 00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.162 12:50:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.162 [2024-11-29 12:50:21.782299] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:06:22.162 [2024-11-29 12:50:21.782342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803256 ] 00:06:22.162 [2024-11-29 12:50:21.845401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.162 [2024-11-29 12:50:21.888743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.422 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.422 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:22.422 12:50:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.422 12:50:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:22.422 
12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:22.423 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:22.423 [2024-11-29 12:50:22.165123] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:06:22.423 [2024-11-29 12:50:22.165172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803479 ] 00:06:22.423 [2024-11-29 12:50:22.226727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.682 [2024-11-29 12:50:22.268611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.682 [2024-11-29 12:50:22.268662] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:22.682 [2024-11-29 12:50:22.268687] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:22.682 [2024-11-29 12:50:22.268696] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1803256 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1803256 ']' 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1803256 00:06:22.682 12:50:22 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803256 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803256' 00:06:22.682 killing process with pid 1803256 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1803256 00:06:22.682 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1803256 00:06:22.942 00:06:22.942 real 0m0.937s 00:06:22.942 user 0m1.016s 00:06:22.942 sys 0m0.377s 00:06:22.942 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.942 12:50:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.942 ************************************ 00:06:22.942 END TEST exit_on_failed_rpc_init 00:06:22.942 ************************************ 00:06:22.942 12:50:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:22.942 00:06:22.942 real 0m13.093s 00:06:22.942 user 0m12.419s 00:06:22.942 sys 0m1.492s 00:06:22.942 12:50:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.942 12:50:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.942 ************************************ 00:06:22.942 END TEST skip_rpc 00:06:22.942 ************************************ 00:06:22.942 12:50:22 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:22.942 12:50:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.942 12:50:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.942 12:50:22 -- common/autotest_common.sh@10 -- # set +x 00:06:23.201 ************************************ 00:06:23.201 START TEST rpc_client 00:06:23.201 ************************************ 00:06:23.201 12:50:22 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:23.201 * Looking for test storage... 00:06:23.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:23.201 12:50:22 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.201 12:50:22 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.201 12:50:22 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.201 12:50:22 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:23.201 12:50:22 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:23.202 12:50:22 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.202 12:50:22 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:23.202 12:50:22 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.202 12:50:22 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.202 12:50:22 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.202 12:50:22 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:23.202 12:50:22 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.202 12:50:22 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.202 --rc genhtml_branch_coverage=1 00:06:23.202 --rc genhtml_function_coverage=1 00:06:23.202 --rc genhtml_legend=1 00:06:23.202 --rc geninfo_all_blocks=1 00:06:23.202 --rc geninfo_unexecuted_blocks=1 00:06:23.202 00:06:23.202 ' 00:06:23.202 12:50:22 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.202 --rc genhtml_branch_coverage=1 
00:06:23.202 --rc genhtml_function_coverage=1 00:06:23.202 --rc genhtml_legend=1 00:06:23.202 --rc geninfo_all_blocks=1 00:06:23.202 --rc geninfo_unexecuted_blocks=1 00:06:23.202 00:06:23.202 ' 00:06:23.202 12:50:22 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.202 --rc genhtml_branch_coverage=1 00:06:23.202 --rc genhtml_function_coverage=1 00:06:23.202 --rc genhtml_legend=1 00:06:23.202 --rc geninfo_all_blocks=1 00:06:23.202 --rc geninfo_unexecuted_blocks=1 00:06:23.202 00:06:23.202 ' 00:06:23.202 12:50:22 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.202 --rc genhtml_branch_coverage=1 00:06:23.202 --rc genhtml_function_coverage=1 00:06:23.202 --rc genhtml_legend=1 00:06:23.202 --rc geninfo_all_blocks=1 00:06:23.202 --rc geninfo_unexecuted_blocks=1 00:06:23.202 00:06:23.202 ' 00:06:23.202 12:50:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:23.202 OK 00:06:23.202 12:50:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:23.202 00:06:23.202 real 0m0.201s 00:06:23.202 user 0m0.110s 00:06:23.202 sys 0m0.101s 00:06:23.202 12:50:22 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.202 12:50:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:23.202 ************************************ 00:06:23.202 END TEST rpc_client 00:06:23.202 ************************************ 00:06:23.202 12:50:23 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:23.202 12:50:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.202 12:50:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.202 12:50:23 -- common/autotest_common.sh@10 
-- # set +x 00:06:23.461 ************************************ 00:06:23.461 START TEST json_config 00:06:23.461 ************************************ 00:06:23.461 12:50:23 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:23.461 12:50:23 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.461 12:50:23 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.461 12:50:23 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.461 12:50:23 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.461 12:50:23 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.461 12:50:23 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.461 12:50:23 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.461 12:50:23 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.461 12:50:23 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.461 12:50:23 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.461 12:50:23 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.461 12:50:23 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.461 12:50:23 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.461 12:50:23 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.461 12:50:23 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.461 12:50:23 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:23.461 12:50:23 json_config -- scripts/common.sh@345 -- # : 1 00:06:23.461 12:50:23 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.461 12:50:23 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.461 12:50:23 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:23.461 12:50:23 json_config -- scripts/common.sh@353 -- # local d=1 00:06:23.461 12:50:23 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.461 12:50:23 json_config -- scripts/common.sh@355 -- # echo 1 00:06:23.461 12:50:23 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.461 12:50:23 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:23.461 12:50:23 json_config -- scripts/common.sh@353 -- # local d=2 00:06:23.461 12:50:23 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.461 12:50:23 json_config -- scripts/common.sh@355 -- # echo 2 00:06:23.461 12:50:23 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.461 12:50:23 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.462 12:50:23 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.462 12:50:23 json_config -- scripts/common.sh@368 -- # return 0 00:06:23.462 12:50:23 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.462 12:50:23 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.462 --rc genhtml_branch_coverage=1 00:06:23.462 --rc genhtml_function_coverage=1 00:06:23.462 --rc genhtml_legend=1 00:06:23.462 --rc geninfo_all_blocks=1 00:06:23.462 --rc geninfo_unexecuted_blocks=1 00:06:23.462 00:06:23.462 ' 00:06:23.462 12:50:23 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.462 --rc genhtml_branch_coverage=1 00:06:23.462 --rc genhtml_function_coverage=1 00:06:23.462 --rc genhtml_legend=1 00:06:23.462 --rc geninfo_all_blocks=1 00:06:23.462 --rc geninfo_unexecuted_blocks=1 00:06:23.462 00:06:23.462 ' 00:06:23.462 12:50:23 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.462 --rc genhtml_branch_coverage=1 00:06:23.462 --rc genhtml_function_coverage=1 00:06:23.462 --rc genhtml_legend=1 00:06:23.462 --rc geninfo_all_blocks=1 00:06:23.462 --rc geninfo_unexecuted_blocks=1 00:06:23.462 00:06:23.462 ' 00:06:23.462 12:50:23 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.462 --rc genhtml_branch_coverage=1 00:06:23.462 --rc genhtml_function_coverage=1 00:06:23.462 --rc genhtml_legend=1 00:06:23.462 --rc geninfo_all_blocks=1 00:06:23.462 --rc geninfo_unexecuted_blocks=1 00:06:23.462 00:06:23.462 ' 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.462 12:50:23 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.462 12:50:23 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.462 12:50:23 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.462 12:50:23 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.462 12:50:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.462 12:50:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.462 12:50:23 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.462 12:50:23 json_config -- paths/export.sh@5 -- # export PATH 00:06:23.462 12:50:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@51 -- # : 0 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.462 12:50:23 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:23.462 12:50:23 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:23.463 12:50:23 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:23.463 12:50:23 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:23.463 12:50:23 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:23.463 12:50:23 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:23.463 12:50:23 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:23.463 INFO: JSON configuration test init 00:06:23.463 12:50:23 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:23.463 12:50:23 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.463 12:50:23 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.463 12:50:23 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:23.463 12:50:23 json_config -- json_config/common.sh@9 -- # local app=target 00:06:23.463 12:50:23 json_config -- json_config/common.sh@10 -- # shift 00:06:23.463 12:50:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:23.463 12:50:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:23.463 12:50:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:23.463 12:50:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.463 12:50:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.463 12:50:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1803726 00:06:23.463 12:50:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:23.463 Waiting for target to run... 
00:06:23.463 12:50:23 json_config -- json_config/common.sh@25 -- # waitforlisten 1803726 /var/tmp/spdk_tgt.sock 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@835 -- # '[' -z 1803726 ']' 00:06:23.463 12:50:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.463 12:50:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.721 [2024-11-29 12:50:23.298514] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:06:23.721 [2024-11-29 12:50:23.298564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803726 ] 00:06:23.980 [2024-11-29 12:50:23.739444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.980 [2024-11-29 12:50:23.797038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.545 12:50:24 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.545 12:50:24 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:24.545 12:50:24 json_config -- json_config/common.sh@26 -- # echo '' 00:06:24.545 00:06:24.545 12:50:24 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:24.545 12:50:24 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:24.545 12:50:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.545 12:50:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.545 12:50:24 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:24.545 12:50:24 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:24.545 12:50:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.545 12:50:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.545 12:50:24 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:24.545 12:50:24 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:24.545 12:50:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:27.829 12:50:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.829 12:50:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:27.829 12:50:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@54 -- # sort 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:27.829 12:50:27 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:27.829 12:50:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.829 12:50:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:27.829 12:50:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.829 12:50:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:27.829 12:50:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:27.829 12:50:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:28.088 MallocForNvmf0 00:06:28.088 12:50:27 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:06:28.088 12:50:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:28.088 MallocForNvmf1 00:06:28.088 12:50:27 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:28.088 12:50:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:28.347 [2024-11-29 12:50:28.059075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.347 12:50:28 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:28.347 12:50:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:28.606 12:50:28 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:28.606 12:50:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:28.865 12:50:28 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:28.865 12:50:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:28.865 12:50:28 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:28.865 12:50:28 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:29.124 [2024-11-29 12:50:28.821455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:29.124 12:50:28 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:29.124 12:50:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.124 12:50:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.124 12:50:28 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:29.124 12:50:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.124 12:50:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.124 12:50:28 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:29.124 12:50:28 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:29.124 12:50:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:29.384 MallocBdevForConfigChangeCheck 00:06:29.384 12:50:29 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:29.384 12:50:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:29.384 12:50:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.384 12:50:29 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:29.384 12:50:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.643 12:50:29 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:29.643 INFO: shutting down applications... 00:06:29.643 12:50:29 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:29.643 12:50:29 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:29.643 12:50:29 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:29.643 12:50:29 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:31.549 Calling clear_iscsi_subsystem 00:06:31.549 Calling clear_nvmf_subsystem 00:06:31.549 Calling clear_nbd_subsystem 00:06:31.549 Calling clear_ublk_subsystem 00:06:31.549 Calling clear_vhost_blk_subsystem 00:06:31.549 Calling clear_vhost_scsi_subsystem 00:06:31.549 Calling clear_bdev_subsystem 00:06:31.549 12:50:31 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:31.549 12:50:31 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:31.549 12:50:31 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:31.549 12:50:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:31.549 12:50:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.549 12:50:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:31.549 12:50:31 json_config -- json_config/json_config.sh@352 -- # break 00:06:31.549 12:50:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:31.549 12:50:31 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:06:31.549 12:50:31 json_config -- json_config/common.sh@31 -- # local app=target 00:06:31.549 12:50:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:31.549 12:50:31 json_config -- json_config/common.sh@35 -- # [[ -n 1803726 ]] 00:06:31.549 12:50:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1803726 00:06:31.549 12:50:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:31.549 12:50:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:31.549 12:50:31 json_config -- json_config/common.sh@41 -- # kill -0 1803726 00:06:31.549 12:50:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:32.118 12:50:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:32.118 12:50:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.118 12:50:31 json_config -- json_config/common.sh@41 -- # kill -0 1803726 00:06:32.118 12:50:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:32.118 12:50:31 json_config -- json_config/common.sh@43 -- # break 00:06:32.118 12:50:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:32.118 12:50:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:32.118 SPDK target shutdown done 00:06:32.118 12:50:31 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:32.118 INFO: relaunching applications... 
00:06:32.118 12:50:31 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.118 12:50:31 json_config -- json_config/common.sh@9 -- # local app=target 00:06:32.118 12:50:31 json_config -- json_config/common.sh@10 -- # shift 00:06:32.118 12:50:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:32.118 12:50:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:32.118 12:50:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:32.118 12:50:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.118 12:50:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.118 12:50:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1805342 00:06:32.118 12:50:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:32.118 Waiting for target to run... 00:06:32.118 12:50:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.118 12:50:31 json_config -- json_config/common.sh@25 -- # waitforlisten 1805342 /var/tmp/spdk_tgt.sock 00:06:32.118 12:50:31 json_config -- common/autotest_common.sh@835 -- # '[' -z 1805342 ']' 00:06:32.118 12:50:31 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:32.118 12:50:31 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.118 12:50:31 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:32.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:32.118 12:50:31 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.118 12:50:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.118 [2024-11-29 12:50:31.928179] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:06:32.118 [2024-11-29 12:50:31.928238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805342 ] 00:06:32.686 [2024-11-29 12:50:32.371959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.686 [2024-11-29 12:50:32.429530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.977 [2024-11-29 12:50:35.460972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.977 [2024-11-29 12:50:35.493313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:36.546 12:50:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.546 12:50:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:36.546 12:50:36 json_config -- json_config/common.sh@26 -- # echo '' 00:06:36.546 00:06:36.546 12:50:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:36.546 12:50:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:36.546 INFO: Checking if target configuration is the same... 
00:06:36.547 12:50:36 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.547 12:50:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:36.547 12:50:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:36.547 + '[' 2 -ne 2 ']' 00:06:36.547 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:36.547 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:36.547 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:36.547 +++ basename /dev/fd/62 00:06:36.547 ++ mktemp /tmp/62.XXX 00:06:36.547 + tmp_file_1=/tmp/62.ytA 00:06:36.547 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.547 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:36.547 + tmp_file_2=/tmp/spdk_tgt_config.json.CTW 00:06:36.547 + ret=0 00:06:36.547 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.806 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:36.806 + diff -u /tmp/62.ytA /tmp/spdk_tgt_config.json.CTW 00:06:36.806 + echo 'INFO: JSON config files are the same' 00:06:36.806 INFO: JSON config files are the same 00:06:36.806 + rm /tmp/62.ytA /tmp/spdk_tgt_config.json.CTW 00:06:36.806 + exit 0 00:06:36.806 12:50:36 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:36.806 12:50:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:36.806 INFO: changing configuration and checking if this can be detected... 
00:06:36.806 12:50:36 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:36.806 12:50:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:37.065 12:50:36 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:37.065 12:50:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.065 12:50:36 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.065 + '[' 2 -ne 2 ']' 00:06:37.065 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:37.065 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:37.065 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:37.065 +++ basename /dev/fd/62 00:06:37.065 ++ mktemp /tmp/62.XXX 00:06:37.065 + tmp_file_1=/tmp/62.eoN 00:06:37.065 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.065 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:37.065 + tmp_file_2=/tmp/spdk_tgt_config.json.7b9 00:06:37.065 + ret=0 00:06:37.065 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.324 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:37.324 + diff -u /tmp/62.eoN /tmp/spdk_tgt_config.json.7b9 00:06:37.324 + ret=1 00:06:37.324 + echo '=== Start of file: /tmp/62.eoN ===' 00:06:37.324 + cat /tmp/62.eoN 00:06:37.324 + echo '=== End of file: /tmp/62.eoN ===' 00:06:37.324 + echo '' 00:06:37.324 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7b9 ===' 00:06:37.324 + cat /tmp/spdk_tgt_config.json.7b9 00:06:37.324 + echo '=== End of file: /tmp/spdk_tgt_config.json.7b9 ===' 00:06:37.324 + echo '' 00:06:37.324 + rm /tmp/62.eoN /tmp/spdk_tgt_config.json.7b9 00:06:37.324 + exit 1 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:37.324 INFO: configuration change detected. 
00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:37.324 12:50:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.324 12:50:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 1805342 ]] 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:37.324 12:50:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.324 12:50:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:37.324 12:50:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:37.324 12:50:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.324 12:50:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.583 12:50:37 json_config -- json_config/json_config.sh@330 -- # killprocess 1805342 00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@954 -- # '[' -z 1805342 ']' 00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@958 -- # kill -0 1805342
00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@959 -- # uname 00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1805342 00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1805342' 00:06:37.583 killing process with pid 1805342 00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@973 -- # kill 1805342 00:06:37.583 12:50:37 json_config -- common/autotest_common.sh@978 -- # wait 1805342 00:06:38.960 12:50:38 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.960 12:50:38 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:38.960 12:50:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.960 12:50:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.960 12:50:38 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:38.960 12:50:38 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:38.960 INFO: Success 00:06:38.960 00:06:38.960 real 0m15.673s 00:06:38.960 user 0m15.967s 00:06:38.960 sys 0m2.720s 00:06:38.960 12:50:38 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.960 12:50:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.960 ************************************ 00:06:38.960 END TEST json_config 00:06:38.960 ************************************ 00:06:38.960 12:50:38 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:38.960 12:50:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.960 12:50:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.960 12:50:38 -- common/autotest_common.sh@10 -- # set +x 00:06:39.220 ************************************ 00:06:39.220 START TEST json_config_extra_key 00:06:39.220 ************************************ 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.220 12:50:38 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.220 --rc genhtml_branch_coverage=1 00:06:39.220 --rc genhtml_function_coverage=1 00:06:39.220 --rc genhtml_legend=1 00:06:39.220 --rc geninfo_all_blocks=1 
00:06:39.220 --rc geninfo_unexecuted_blocks=1 00:06:39.220 00:06:39.220 ' 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.220 --rc genhtml_branch_coverage=1 00:06:39.220 --rc genhtml_function_coverage=1 00:06:39.220 --rc genhtml_legend=1 00:06:39.220 --rc geninfo_all_blocks=1 00:06:39.220 --rc geninfo_unexecuted_blocks=1 00:06:39.220 00:06:39.220 ' 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.220 --rc genhtml_branch_coverage=1 00:06:39.220 --rc genhtml_function_coverage=1 00:06:39.220 --rc genhtml_legend=1 00:06:39.220 --rc geninfo_all_blocks=1 00:06:39.220 --rc geninfo_unexecuted_blocks=1 00:06:39.220 00:06:39.220 ' 00:06:39.220 12:50:38 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.220 --rc genhtml_branch_coverage=1 00:06:39.220 --rc genhtml_function_coverage=1 00:06:39.220 --rc genhtml_legend=1 00:06:39.220 --rc geninfo_all_blocks=1 00:06:39.221 --rc geninfo_unexecuted_blocks=1 00:06:39.221 00:06:39.221 ' 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:39.221 12:50:38 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.221 12:50:38 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.221 12:50:38 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.221 12:50:38 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.221 12:50:38 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:39.221 12:50:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.221 12:50:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.221 12:50:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:39.221 12:50:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:39.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:39.221 12:50:38 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:39.221 INFO: launching applications... 00:06:39.221 12:50:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1806626 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:39.221 Waiting for target to run... 
00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1806626 /var/tmp/spdk_tgt.sock 00:06:39.221 12:50:38 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1806626 ']' 00:06:39.221 12:50:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:39.221 12:50:38 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:39.221 12:50:38 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.221 12:50:38 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:39.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:39.221 12:50:38 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.221 12:50:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:39.221 [2024-11-29 12:50:39.028170] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:06:39.221 [2024-11-29 12:50:39.028216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806626 ] 00:06:39.790 [2024-11-29 12:50:39.460401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.790 [2024-11-29 12:50:39.512763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.049 12:50:39 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.049 12:50:39 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:40.049 12:50:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:40.049 00:06:40.049 12:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:40.049 INFO: shutting down applications... 00:06:40.049 12:50:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:40.049 12:50:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:40.049 12:50:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:40.049 12:50:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1806626 ]] 00:06:40.049 12:50:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1806626 00:06:40.049 12:50:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:40.049 12:50:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:40.049 12:50:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1806626 00:06:40.049 12:50:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:40.618 12:50:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:40.618 12:50:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:40.618 12:50:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1806626 00:06:40.618 12:50:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:40.618 12:50:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:40.618 12:50:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:40.618 12:50:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:40.618 SPDK target shutdown done 00:06:40.618 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:40.618 Success 00:06:40.618 00:06:40.618 real 0m1.561s 00:06:40.618 user 0m1.198s 00:06:40.618 sys 0m0.538s 00:06:40.618 12:50:40 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.618 12:50:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:40.618 ************************************ 00:06:40.618 END TEST json_config_extra_key 00:06:40.618 ************************************ 00:06:40.618 12:50:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:40.618 12:50:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.618 12:50:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.618 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:40.618 ************************************ 00:06:40.618 START TEST alias_rpc 00:06:40.618 ************************************ 00:06:40.618 12:50:40 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:40.878 * Looking for test storage... 
00:06:40.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.878 12:50:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.878 --rc genhtml_branch_coverage=1 00:06:40.878 --rc genhtml_function_coverage=1 00:06:40.878 --rc genhtml_legend=1 00:06:40.878 --rc geninfo_all_blocks=1 00:06:40.878 --rc geninfo_unexecuted_blocks=1 00:06:40.878 00:06:40.878 ' 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.878 --rc genhtml_branch_coverage=1 00:06:40.878 --rc genhtml_function_coverage=1 00:06:40.878 --rc genhtml_legend=1 00:06:40.878 --rc geninfo_all_blocks=1 00:06:40.878 --rc geninfo_unexecuted_blocks=1 00:06:40.878 00:06:40.878 ' 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:40.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.878 --rc genhtml_branch_coverage=1 00:06:40.878 --rc genhtml_function_coverage=1 00:06:40.878 --rc genhtml_legend=1 00:06:40.878 --rc geninfo_all_blocks=1 00:06:40.878 --rc geninfo_unexecuted_blocks=1 00:06:40.878 00:06:40.878 ' 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.878 --rc genhtml_branch_coverage=1 00:06:40.878 --rc genhtml_function_coverage=1 00:06:40.878 --rc genhtml_legend=1 00:06:40.878 --rc geninfo_all_blocks=1 00:06:40.878 --rc geninfo_unexecuted_blocks=1 00:06:40.878 00:06:40.878 ' 00:06:40.878 12:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.878 12:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1806920 00:06:40.878 12:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1806920 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1806920 ']' 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.878 12:50:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.878 12:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.878 [2024-11-29 12:50:40.647256] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:06:40.878 [2024-11-29 12:50:40.647301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806920 ] 00:06:41.138 [2024-11-29 12:50:40.712150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.138 [2024-11-29 12:50:40.755150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.397 12:50:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.397 12:50:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.397 12:50:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:41.397 12:50:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1806920 00:06:41.397 12:50:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1806920 ']' 00:06:41.397 12:50:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1806920 00:06:41.397 12:50:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:41.397 12:50:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.397 12:50:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1806920 00:06:41.657 12:50:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.657 12:50:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.657 12:50:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1806920' 00:06:41.657 killing process with pid 1806920 00:06:41.657 12:50:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 1806920 00:06:41.657 12:50:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 1806920 00:06:41.918 00:06:41.918 real 0m1.116s 00:06:41.918 user 0m1.135s 00:06:41.918 sys 0m0.393s 00:06:41.918 12:50:41 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.918 12:50:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.918 ************************************ 00:06:41.918 END TEST alias_rpc 00:06:41.918 ************************************ 00:06:41.918 12:50:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:41.918 12:50:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:41.918 12:50:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.918 12:50:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.918 12:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:41.918 ************************************ 00:06:41.918 START TEST spdkcli_tcp 00:06:41.918 ************************************ 00:06:41.918 12:50:41 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:41.918 * Looking for test storage... 
00:06:41.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:41.918 12:50:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.919 12:50:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.919 12:50:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.178 12:50:41 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.178 12:50:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:42.178 12:50:41 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.178 12:50:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.178 --rc genhtml_branch_coverage=1 00:06:42.179 --rc genhtml_function_coverage=1 00:06:42.179 --rc genhtml_legend=1 00:06:42.179 --rc geninfo_all_blocks=1 00:06:42.179 --rc geninfo_unexecuted_blocks=1 00:06:42.179 00:06:42.179 ' 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.179 --rc genhtml_branch_coverage=1 00:06:42.179 --rc genhtml_function_coverage=1 00:06:42.179 --rc genhtml_legend=1 00:06:42.179 --rc geninfo_all_blocks=1 00:06:42.179 --rc geninfo_unexecuted_blocks=1 00:06:42.179 00:06:42.179 ' 00:06:42.179 12:50:41 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.179 --rc genhtml_branch_coverage=1 00:06:42.179 --rc genhtml_function_coverage=1 00:06:42.179 --rc genhtml_legend=1 00:06:42.179 --rc geninfo_all_blocks=1 00:06:42.179 --rc geninfo_unexecuted_blocks=1 00:06:42.179 00:06:42.179 ' 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.179 --rc genhtml_branch_coverage=1 00:06:42.179 --rc genhtml_function_coverage=1 00:06:42.179 --rc genhtml_legend=1 00:06:42.179 --rc geninfo_all_blocks=1 00:06:42.179 --rc geninfo_unexecuted_blocks=1 00:06:42.179 00:06:42.179 ' 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1807207 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1807207 00:06:42.179 12:50:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1807207 ']' 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.179 12:50:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.179 [2024-11-29 12:50:41.830881] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:06:42.179 [2024-11-29 12:50:41.830928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807207 ] 00:06:42.179 [2024-11-29 12:50:41.892112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.179 [2024-11-29 12:50:41.933427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.179 [2024-11-29 12:50:41.933430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.438 12:50:42 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.438 12:50:42 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:42.438 12:50:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1807218 00:06:42.438 12:50:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:42.438 12:50:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:42.698 [ 00:06:42.698 "bdev_malloc_delete", 00:06:42.698 "bdev_malloc_create", 00:06:42.698 "bdev_null_resize", 00:06:42.698 "bdev_null_delete", 00:06:42.698 "bdev_null_create", 00:06:42.698 "bdev_nvme_cuse_unregister", 00:06:42.698 "bdev_nvme_cuse_register", 00:06:42.698 "bdev_opal_new_user", 00:06:42.698 "bdev_opal_set_lock_state", 00:06:42.698 "bdev_opal_delete", 00:06:42.698 "bdev_opal_get_info", 00:06:42.698 "bdev_opal_create", 00:06:42.698 "bdev_nvme_opal_revert", 00:06:42.698 "bdev_nvme_opal_init", 00:06:42.698 "bdev_nvme_send_cmd", 00:06:42.698 "bdev_nvme_set_keys", 00:06:42.698 "bdev_nvme_get_path_iostat", 00:06:42.698 "bdev_nvme_get_mdns_discovery_info", 00:06:42.698 "bdev_nvme_stop_mdns_discovery", 00:06:42.698 "bdev_nvme_start_mdns_discovery", 00:06:42.698 "bdev_nvme_set_multipath_policy", 00:06:42.698 "bdev_nvme_set_preferred_path", 00:06:42.698 "bdev_nvme_get_io_paths", 00:06:42.698 "bdev_nvme_remove_error_injection", 00:06:42.698 "bdev_nvme_add_error_injection", 00:06:42.698 "bdev_nvme_get_discovery_info", 00:06:42.698 "bdev_nvme_stop_discovery", 00:06:42.698 "bdev_nvme_start_discovery", 00:06:42.698 "bdev_nvme_get_controller_health_info", 00:06:42.698 "bdev_nvme_disable_controller", 00:06:42.698 "bdev_nvme_enable_controller", 00:06:42.698 "bdev_nvme_reset_controller", 00:06:42.698 "bdev_nvme_get_transport_statistics", 00:06:42.698 "bdev_nvme_apply_firmware", 00:06:42.698 "bdev_nvme_detach_controller", 00:06:42.698 "bdev_nvme_get_controllers", 00:06:42.698 "bdev_nvme_attach_controller", 00:06:42.698 "bdev_nvme_set_hotplug", 00:06:42.698 "bdev_nvme_set_options", 00:06:42.698 "bdev_passthru_delete", 00:06:42.698 "bdev_passthru_create", 00:06:42.698 "bdev_lvol_set_parent_bdev", 00:06:42.698 "bdev_lvol_set_parent", 00:06:42.698 "bdev_lvol_check_shallow_copy", 00:06:42.698 "bdev_lvol_start_shallow_copy", 00:06:42.698 
"bdev_lvol_grow_lvstore", 00:06:42.698 "bdev_lvol_get_lvols", 00:06:42.698 "bdev_lvol_get_lvstores", 00:06:42.698 "bdev_lvol_delete", 00:06:42.698 "bdev_lvol_set_read_only", 00:06:42.698 "bdev_lvol_resize", 00:06:42.698 "bdev_lvol_decouple_parent", 00:06:42.698 "bdev_lvol_inflate", 00:06:42.698 "bdev_lvol_rename", 00:06:42.698 "bdev_lvol_clone_bdev", 00:06:42.698 "bdev_lvol_clone", 00:06:42.698 "bdev_lvol_snapshot", 00:06:42.698 "bdev_lvol_create", 00:06:42.698 "bdev_lvol_delete_lvstore", 00:06:42.698 "bdev_lvol_rename_lvstore", 00:06:42.698 "bdev_lvol_create_lvstore", 00:06:42.698 "bdev_raid_set_options", 00:06:42.698 "bdev_raid_remove_base_bdev", 00:06:42.698 "bdev_raid_add_base_bdev", 00:06:42.698 "bdev_raid_delete", 00:06:42.698 "bdev_raid_create", 00:06:42.698 "bdev_raid_get_bdevs", 00:06:42.698 "bdev_error_inject_error", 00:06:42.698 "bdev_error_delete", 00:06:42.698 "bdev_error_create", 00:06:42.698 "bdev_split_delete", 00:06:42.698 "bdev_split_create", 00:06:42.698 "bdev_delay_delete", 00:06:42.698 "bdev_delay_create", 00:06:42.698 "bdev_delay_update_latency", 00:06:42.698 "bdev_zone_block_delete", 00:06:42.698 "bdev_zone_block_create", 00:06:42.698 "blobfs_create", 00:06:42.698 "blobfs_detect", 00:06:42.698 "blobfs_set_cache_size", 00:06:42.698 "bdev_aio_delete", 00:06:42.698 "bdev_aio_rescan", 00:06:42.698 "bdev_aio_create", 00:06:42.698 "bdev_ftl_set_property", 00:06:42.698 "bdev_ftl_get_properties", 00:06:42.698 "bdev_ftl_get_stats", 00:06:42.698 "bdev_ftl_unmap", 00:06:42.698 "bdev_ftl_unload", 00:06:42.698 "bdev_ftl_delete", 00:06:42.698 "bdev_ftl_load", 00:06:42.698 "bdev_ftl_create", 00:06:42.698 "bdev_virtio_attach_controller", 00:06:42.698 "bdev_virtio_scsi_get_devices", 00:06:42.698 "bdev_virtio_detach_controller", 00:06:42.698 "bdev_virtio_blk_set_hotplug", 00:06:42.698 "bdev_iscsi_delete", 00:06:42.698 "bdev_iscsi_create", 00:06:42.698 "bdev_iscsi_set_options", 00:06:42.698 "accel_error_inject_error", 00:06:42.698 "ioat_scan_accel_module", 
00:06:42.698 "dsa_scan_accel_module", 00:06:42.698 "iaa_scan_accel_module", 00:06:42.698 "vfu_virtio_create_fs_endpoint", 00:06:42.698 "vfu_virtio_create_scsi_endpoint", 00:06:42.698 "vfu_virtio_scsi_remove_target", 00:06:42.698 "vfu_virtio_scsi_add_target", 00:06:42.698 "vfu_virtio_create_blk_endpoint", 00:06:42.698 "vfu_virtio_delete_endpoint", 00:06:42.698 "keyring_file_remove_key", 00:06:42.698 "keyring_file_add_key", 00:06:42.698 "keyring_linux_set_options", 00:06:42.698 "fsdev_aio_delete", 00:06:42.698 "fsdev_aio_create", 00:06:42.698 "iscsi_get_histogram", 00:06:42.698 "iscsi_enable_histogram", 00:06:42.698 "iscsi_set_options", 00:06:42.698 "iscsi_get_auth_groups", 00:06:42.698 "iscsi_auth_group_remove_secret", 00:06:42.698 "iscsi_auth_group_add_secret", 00:06:42.698 "iscsi_delete_auth_group", 00:06:42.698 "iscsi_create_auth_group", 00:06:42.698 "iscsi_set_discovery_auth", 00:06:42.698 "iscsi_get_options", 00:06:42.698 "iscsi_target_node_request_logout", 00:06:42.698 "iscsi_target_node_set_redirect", 00:06:42.698 "iscsi_target_node_set_auth", 00:06:42.698 "iscsi_target_node_add_lun", 00:06:42.698 "iscsi_get_stats", 00:06:42.698 "iscsi_get_connections", 00:06:42.698 "iscsi_portal_group_set_auth", 00:06:42.698 "iscsi_start_portal_group", 00:06:42.698 "iscsi_delete_portal_group", 00:06:42.698 "iscsi_create_portal_group", 00:06:42.698 "iscsi_get_portal_groups", 00:06:42.698 "iscsi_delete_target_node", 00:06:42.698 "iscsi_target_node_remove_pg_ig_maps", 00:06:42.698 "iscsi_target_node_add_pg_ig_maps", 00:06:42.698 "iscsi_create_target_node", 00:06:42.698 "iscsi_get_target_nodes", 00:06:42.698 "iscsi_delete_initiator_group", 00:06:42.698 "iscsi_initiator_group_remove_initiators", 00:06:42.698 "iscsi_initiator_group_add_initiators", 00:06:42.698 "iscsi_create_initiator_group", 00:06:42.698 "iscsi_get_initiator_groups", 00:06:42.698 "nvmf_set_crdt", 00:06:42.698 "nvmf_set_config", 00:06:42.698 "nvmf_set_max_subsystems", 00:06:42.698 "nvmf_stop_mdns_prr", 
00:06:42.698 "nvmf_publish_mdns_prr", 00:06:42.698 "nvmf_subsystem_get_listeners", 00:06:42.698 "nvmf_subsystem_get_qpairs", 00:06:42.698 "nvmf_subsystem_get_controllers", 00:06:42.698 "nvmf_get_stats", 00:06:42.698 "nvmf_get_transports", 00:06:42.698 "nvmf_create_transport", 00:06:42.698 "nvmf_get_targets", 00:06:42.698 "nvmf_delete_target", 00:06:42.698 "nvmf_create_target", 00:06:42.698 "nvmf_subsystem_allow_any_host", 00:06:42.698 "nvmf_subsystem_set_keys", 00:06:42.698 "nvmf_subsystem_remove_host", 00:06:42.698 "nvmf_subsystem_add_host", 00:06:42.698 "nvmf_ns_remove_host", 00:06:42.698 "nvmf_ns_add_host", 00:06:42.698 "nvmf_subsystem_remove_ns", 00:06:42.698 "nvmf_subsystem_set_ns_ana_group", 00:06:42.698 "nvmf_subsystem_add_ns", 00:06:42.698 "nvmf_subsystem_listener_set_ana_state", 00:06:42.698 "nvmf_discovery_get_referrals", 00:06:42.698 "nvmf_discovery_remove_referral", 00:06:42.698 "nvmf_discovery_add_referral", 00:06:42.698 "nvmf_subsystem_remove_listener", 00:06:42.698 "nvmf_subsystem_add_listener", 00:06:42.698 "nvmf_delete_subsystem", 00:06:42.698 "nvmf_create_subsystem", 00:06:42.698 "nvmf_get_subsystems", 00:06:42.698 "env_dpdk_get_mem_stats", 00:06:42.698 "nbd_get_disks", 00:06:42.698 "nbd_stop_disk", 00:06:42.698 "nbd_start_disk", 00:06:42.698 "ublk_recover_disk", 00:06:42.698 "ublk_get_disks", 00:06:42.698 "ublk_stop_disk", 00:06:42.698 "ublk_start_disk", 00:06:42.698 "ublk_destroy_target", 00:06:42.698 "ublk_create_target", 00:06:42.698 "virtio_blk_create_transport", 00:06:42.698 "virtio_blk_get_transports", 00:06:42.698 "vhost_controller_set_coalescing", 00:06:42.698 "vhost_get_controllers", 00:06:42.698 "vhost_delete_controller", 00:06:42.698 "vhost_create_blk_controller", 00:06:42.698 "vhost_scsi_controller_remove_target", 00:06:42.698 "vhost_scsi_controller_add_target", 00:06:42.698 "vhost_start_scsi_controller", 00:06:42.698 "vhost_create_scsi_controller", 00:06:42.698 "thread_set_cpumask", 00:06:42.698 "scheduler_set_options", 00:06:42.698 
"framework_get_governor", 00:06:42.698 "framework_get_scheduler", 00:06:42.698 "framework_set_scheduler", 00:06:42.698 "framework_get_reactors", 00:06:42.698 "thread_get_io_channels", 00:06:42.698 "thread_get_pollers", 00:06:42.698 "thread_get_stats", 00:06:42.698 "framework_monitor_context_switch", 00:06:42.698 "spdk_kill_instance", 00:06:42.698 "log_enable_timestamps", 00:06:42.698 "log_get_flags", 00:06:42.698 "log_clear_flag", 00:06:42.698 "log_set_flag", 00:06:42.698 "log_get_level", 00:06:42.698 "log_set_level", 00:06:42.698 "log_get_print_level", 00:06:42.698 "log_set_print_level", 00:06:42.698 "framework_enable_cpumask_locks", 00:06:42.698 "framework_disable_cpumask_locks", 00:06:42.698 "framework_wait_init", 00:06:42.698 "framework_start_init", 00:06:42.698 "scsi_get_devices", 00:06:42.698 "bdev_get_histogram", 00:06:42.698 "bdev_enable_histogram", 00:06:42.698 "bdev_set_qos_limit", 00:06:42.698 "bdev_set_qd_sampling_period", 00:06:42.698 "bdev_get_bdevs", 00:06:42.698 "bdev_reset_iostat", 00:06:42.698 "bdev_get_iostat", 00:06:42.698 "bdev_examine", 00:06:42.698 "bdev_wait_for_examine", 00:06:42.698 "bdev_set_options", 00:06:42.699 "accel_get_stats", 00:06:42.699 "accel_set_options", 00:06:42.699 "accel_set_driver", 00:06:42.699 "accel_crypto_key_destroy", 00:06:42.699 "accel_crypto_keys_get", 00:06:42.699 "accel_crypto_key_create", 00:06:42.699 "accel_assign_opc", 00:06:42.699 "accel_get_module_info", 00:06:42.699 "accel_get_opc_assignments", 00:06:42.699 "vmd_rescan", 00:06:42.699 "vmd_remove_device", 00:06:42.699 "vmd_enable", 00:06:42.699 "sock_get_default_impl", 00:06:42.699 "sock_set_default_impl", 00:06:42.699 "sock_impl_set_options", 00:06:42.699 "sock_impl_get_options", 00:06:42.699 "iobuf_get_stats", 00:06:42.699 "iobuf_set_options", 00:06:42.699 "keyring_get_keys", 00:06:42.699 "vfu_tgt_set_base_path", 00:06:42.699 "framework_get_pci_devices", 00:06:42.699 "framework_get_config", 00:06:42.699 "framework_get_subsystems", 00:06:42.699 
"fsdev_set_opts", 00:06:42.699 "fsdev_get_opts", 00:06:42.699 "trace_get_info", 00:06:42.699 "trace_get_tpoint_group_mask", 00:06:42.699 "trace_disable_tpoint_group", 00:06:42.699 "trace_enable_tpoint_group", 00:06:42.699 "trace_clear_tpoint_mask", 00:06:42.699 "trace_set_tpoint_mask", 00:06:42.699 "notify_get_notifications", 00:06:42.699 "notify_get_types", 00:06:42.699 "spdk_get_version", 00:06:42.699 "rpc_get_methods" 00:06:42.699 ] 00:06:42.699 12:50:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.699 12:50:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:42.699 12:50:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1807207 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1807207 ']' 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1807207 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1807207 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1807207' 00:06:42.699 killing process with pid 1807207 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1807207 00:06:42.699 12:50:42 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1807207 00:06:42.958 00:06:42.958 real 0m1.142s 00:06:42.958 user 0m1.936s 00:06:42.958 sys 0m0.427s 00:06:42.958 12:50:42 spdkcli_tcp -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:06:42.958 12:50:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.958 ************************************ 00:06:42.958 END TEST spdkcli_tcp 00:06:42.958 ************************************ 00:06:42.958 12:50:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:42.958 12:50:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.958 12:50:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.958 12:50:42 -- common/autotest_common.sh@10 -- # set +x 00:06:43.218 ************************************ 00:06:43.218 START TEST dpdk_mem_utility 00:06:43.218 ************************************ 00:06:43.218 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:43.218 * Looking for test storage... 00:06:43.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:43.218 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.218 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.218 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.218 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.218 12:50:42 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.218 12:50:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:43.218 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.218 12:50:42 dpdk_mem_utility 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.218 --rc genhtml_branch_coverage=1 00:06:43.218 --rc genhtml_function_coverage=1 00:06:43.218 --rc genhtml_legend=1 00:06:43.218 --rc geninfo_all_blocks=1 00:06:43.218 --rc geninfo_unexecuted_blocks=1 00:06:43.218 00:06:43.218 ' 00:06:43.219 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.219 --rc genhtml_branch_coverage=1 00:06:43.219 --rc genhtml_function_coverage=1 00:06:43.219 --rc genhtml_legend=1 00:06:43.219 --rc geninfo_all_blocks=1 00:06:43.219 --rc geninfo_unexecuted_blocks=1 00:06:43.219 00:06:43.219 ' 00:06:43.219 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.219 --rc genhtml_branch_coverage=1 00:06:43.219 --rc genhtml_function_coverage=1 00:06:43.219 --rc genhtml_legend=1 00:06:43.219 --rc geninfo_all_blocks=1 00:06:43.219 --rc geninfo_unexecuted_blocks=1 00:06:43.219 00:06:43.219 ' 00:06:43.219 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.219 --rc genhtml_branch_coverage=1 00:06:43.219 --rc genhtml_function_coverage=1 00:06:43.219 --rc genhtml_legend=1 00:06:43.219 --rc geninfo_all_blocks=1 00:06:43.219 --rc geninfo_unexecuted_blocks=1 00:06:43.219 00:06:43.219 ' 00:06:43.219 12:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:43.219 12:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1807511 00:06:43.219 12:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:43.219 12:50:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1807511 00:06:43.219 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1807511 ']' 00:06:43.219 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.219 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.219 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.219 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.219 12:50:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.219 [2024-11-29 12:50:43.035709] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:06:43.219 [2024-11-29 12:50:43.035757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807511 ] 00:06:43.479 [2024-11-29 12:50:43.098301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.479 [2024-11-29 12:50:43.141084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.739 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.739 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:43.739 12:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:43.739 12:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:43.739 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.739 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:43.739 { 00:06:43.739 "filename": "/tmp/spdk_mem_dump.txt" 00:06:43.739 } 00:06:43.739 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.739 12:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:43.739 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:43.739 1 heaps totaling size 818.000000 MiB 00:06:43.739 size: 818.000000 MiB heap id: 0 00:06:43.739 end heaps---------- 00:06:43.739 9 mempools totaling size 603.782043 MiB 00:06:43.739 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:43.739 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:43.739 size: 100.555481 MiB name: bdev_io_1807511 00:06:43.739 size: 50.003479 MiB name: msgpool_1807511 00:06:43.739 size: 36.509338 MiB name: fsdev_io_1807511 
00:06:43.739 size: 21.763794 MiB name: PDU_Pool 00:06:43.739 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:43.739 size: 4.133484 MiB name: evtpool_1807511 00:06:43.739 size: 0.026123 MiB name: Session_Pool 00:06:43.739 end mempools------- 00:06:43.739 6 memzones totaling size 4.142822 MiB 00:06:43.739 size: 1.000366 MiB name: RG_ring_0_1807511 00:06:43.739 size: 1.000366 MiB name: RG_ring_1_1807511 00:06:43.739 size: 1.000366 MiB name: RG_ring_4_1807511 00:06:43.739 size: 1.000366 MiB name: RG_ring_5_1807511 00:06:43.739 size: 0.125366 MiB name: RG_ring_2_1807511 00:06:43.739 size: 0.015991 MiB name: RG_ring_3_1807511 00:06:43.739 end memzones------- 00:06:43.739 12:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:43.739 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:43.739 list of free elements. size: 10.852478 MiB 00:06:43.739 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:43.739 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:43.739 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:43.739 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:43.739 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:43.739 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:43.739 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:43.739 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:43.739 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:43.739 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:43.739 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:43.739 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:43.739 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:43.739 element at address: 0x200028200000 with size: 0.410034 
MiB 00:06:43.739 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:43.739 list of standard malloc elements. size: 199.218628 MiB 00:06:43.739 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:43.739 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:43.739 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:43.739 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:43.739 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:43.739 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:43.739 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:43.739 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:43.739 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:43.739 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:43.739 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:43.739 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:43.739 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:43.739 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:43.739 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:43.739 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:43.739 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:43.739 element at address: 0x200000cff0c0 with 
size: 0.000183 MiB 00:06:43.739 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:43.739 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:43.739 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:43.739 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:43.739 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:43.739 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:43.739 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:43.739 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:43.740 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:43.740 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:43.740 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:43.740 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:43.740 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:43.740 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:43.740 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:43.740 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:43.740 list of memzone associated elements. 
size: 607.928894 MiB 00:06:43.740 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:43.740 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:43.740 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:43.740 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:43.740 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:43.740 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1807511_0 00:06:43.740 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:43.740 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1807511_0 00:06:43.740 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:43.740 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1807511_0 00:06:43.740 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:43.740 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:43.740 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:43.740 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:43.740 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:43.740 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1807511_0 00:06:43.740 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:43.740 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1807511 00:06:43.740 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:43.740 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1807511 00:06:43.740 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:43.740 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:43.740 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:43.740 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:43.740 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:43.740 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:43.740 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:43.740 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:43.740 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:43.740 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1807511 00:06:43.740 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:43.740 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1807511 00:06:43.740 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:43.740 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1807511 00:06:43.740 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:43.740 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1807511 00:06:43.740 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:43.740 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1807511 00:06:43.740 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:43.740 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1807511 00:06:43.740 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:43.740 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:43.740 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:43.740 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:43.740 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:43.740 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:43.740 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:43.740 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1807511 00:06:43.740 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:43.740 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1807511 00:06:43.740 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:06:43.740 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:43.740 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:43.740 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:43.740 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:43.740 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1807511 00:06:43.740 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:43.740 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:43.740 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:43.740 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1807511 00:06:43.740 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:43.740 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1807511 00:06:43.740 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:43.740 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1807511 00:06:43.740 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:43.740 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:43.740 12:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:43.740 12:50:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1807511 00:06:43.740 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1807511 ']' 00:06:43.740 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1807511 00:06:43.740 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:43.740 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.740 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1807511 00:06:43.740 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.740 12:50:43 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.740 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1807511' 00:06:43.740 killing process with pid 1807511 00:06:43.740 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1807511 00:06:43.740 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1807511 00:06:44.000 00:06:44.000 real 0m1.011s 00:06:44.000 user 0m0.940s 00:06:44.000 sys 0m0.420s 00:06:44.000 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.000 12:50:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:44.000 ************************************ 00:06:44.000 END TEST dpdk_mem_utility 00:06:44.000 ************************************ 00:06:44.258 12:50:43 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:44.258 12:50:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.258 12:50:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.258 12:50:43 -- common/autotest_common.sh@10 -- # set +x 00:06:44.258 ************************************ 00:06:44.258 START TEST event 00:06:44.258 ************************************ 00:06:44.258 12:50:43 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:44.258 * Looking for test storage... 
00:06:44.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:44.258 12:50:43 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.258 12:50:43 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.258 12:50:43 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.258 12:50:44 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.258 12:50:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.258 12:50:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.258 12:50:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.258 12:50:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.258 12:50:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.258 12:50:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.258 12:50:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.258 12:50:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.258 12:50:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.258 12:50:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.258 12:50:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.258 12:50:44 event -- scripts/common.sh@344 -- # case "$op" in 00:06:44.258 12:50:44 event -- scripts/common.sh@345 -- # : 1 00:06:44.258 12:50:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.258 12:50:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.258 12:50:44 event -- scripts/common.sh@365 -- # decimal 1 00:06:44.258 12:50:44 event -- scripts/common.sh@353 -- # local d=1 00:06:44.258 12:50:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.258 12:50:44 event -- scripts/common.sh@355 -- # echo 1 00:06:44.258 12:50:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.258 12:50:44 event -- scripts/common.sh@366 -- # decimal 2 00:06:44.258 12:50:44 event -- scripts/common.sh@353 -- # local d=2 00:06:44.258 12:50:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.258 12:50:44 event -- scripts/common.sh@355 -- # echo 2 00:06:44.258 12:50:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.258 12:50:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.258 12:50:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.258 12:50:44 event -- scripts/common.sh@368 -- # return 0 00:06:44.258 12:50:44 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.258 12:50:44 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.258 --rc genhtml_branch_coverage=1 00:06:44.258 --rc genhtml_function_coverage=1 00:06:44.258 --rc genhtml_legend=1 00:06:44.258 --rc geninfo_all_blocks=1 00:06:44.258 --rc geninfo_unexecuted_blocks=1 00:06:44.258 00:06:44.258 ' 00:06:44.258 12:50:44 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.258 --rc genhtml_branch_coverage=1 00:06:44.258 --rc genhtml_function_coverage=1 00:06:44.258 --rc genhtml_legend=1 00:06:44.258 --rc geninfo_all_blocks=1 00:06:44.258 --rc geninfo_unexecuted_blocks=1 00:06:44.258 00:06:44.258 ' 00:06:44.258 12:50:44 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.258 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:44.258 --rc genhtml_branch_coverage=1 00:06:44.258 --rc genhtml_function_coverage=1 00:06:44.258 --rc genhtml_legend=1 00:06:44.258 --rc geninfo_all_blocks=1 00:06:44.258 --rc geninfo_unexecuted_blocks=1 00:06:44.258 00:06:44.259 ' 00:06:44.259 12:50:44 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.259 --rc genhtml_branch_coverage=1 00:06:44.259 --rc genhtml_function_coverage=1 00:06:44.259 --rc genhtml_legend=1 00:06:44.259 --rc geninfo_all_blocks=1 00:06:44.259 --rc geninfo_unexecuted_blocks=1 00:06:44.259 00:06:44.259 ' 00:06:44.259 12:50:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:44.259 12:50:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:44.259 12:50:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:44.259 12:50:44 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:44.259 12:50:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.259 12:50:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.517 ************************************ 00:06:44.517 START TEST event_perf 00:06:44.517 ************************************ 00:06:44.517 12:50:44 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:44.517 Running I/O for 1 seconds...[2024-11-29 12:50:44.102455] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:06:44.517 [2024-11-29 12:50:44.102512] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807801 ] 00:06:44.517 [2024-11-29 12:50:44.168492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.517 [2024-11-29 12:50:44.212463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.517 [2024-11-29 12:50:44.212575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.517 [2024-11-29 12:50:44.212666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.517 [2024-11-29 12:50:44.212669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.454 Running I/O for 1 seconds... 00:06:45.454 lcore 0: 206884 00:06:45.454 lcore 1: 206883 00:06:45.454 lcore 2: 206882 00:06:45.454 lcore 3: 206883 00:06:45.454 done. 
00:06:45.454 00:06:45.454 real 0m1.173s 00:06:45.454 user 0m4.102s 00:06:45.454 sys 0m0.067s 00:06:45.454 12:50:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.454 12:50:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.454 ************************************ 00:06:45.454 END TEST event_perf 00:06:45.454 ************************************ 00:06:45.712 12:50:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:45.713 12:50:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:45.713 12:50:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.713 12:50:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.713 ************************************ 00:06:45.713 START TEST event_reactor 00:06:45.713 ************************************ 00:06:45.713 12:50:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:45.713 [2024-11-29 12:50:45.347245] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:06:45.713 [2024-11-29 12:50:45.347320] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808051 ] 00:06:45.713 [2024-11-29 12:50:45.413895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.713 [2024-11-29 12:50:45.453372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.116 test_start 00:06:47.116 oneshot 00:06:47.116 tick 100 00:06:47.116 tick 100 00:06:47.116 tick 250 00:06:47.116 tick 100 00:06:47.116 tick 100 00:06:47.116 tick 250 00:06:47.116 tick 100 00:06:47.116 tick 500 00:06:47.116 tick 100 00:06:47.116 tick 100 00:06:47.116 tick 250 00:06:47.116 tick 100 00:06:47.116 tick 100 00:06:47.116 test_end 00:06:47.116 00:06:47.116 real 0m1.165s 00:06:47.116 user 0m1.098s 00:06:47.116 sys 0m0.063s 00:06:47.116 12:50:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.116 12:50:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:47.116 ************************************ 00:06:47.116 END TEST event_reactor 00:06:47.116 ************************************ 00:06:47.116 12:50:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:47.116 12:50:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:47.116 12:50:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.116 12:50:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.116 ************************************ 00:06:47.116 START TEST event_reactor_perf 00:06:47.116 ************************************ 00:06:47.116 12:50:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:47.116 [2024-11-29 12:50:46.584794] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:06:47.116 [2024-11-29 12:50:46.584859] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808268 ] 00:06:47.116 [2024-11-29 12:50:46.651651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.116 [2024-11-29 12:50:46.690887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.053 test_start 00:06:48.053 test_end 00:06:48.053 Performance: 508800 events per second 00:06:48.053 00:06:48.053 real 0m1.166s 00:06:48.053 user 0m1.093s 00:06:48.053 sys 0m0.070s 00:06:48.053 12:50:47 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.053 12:50:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.053 ************************************ 00:06:48.053 END TEST event_reactor_perf 00:06:48.053 ************************************ 00:06:48.053 12:50:47 event -- event/event.sh@49 -- # uname -s 00:06:48.053 12:50:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:48.053 12:50:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:48.053 12:50:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.053 12:50:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.053 12:50:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.053 ************************************ 00:06:48.053 START TEST event_scheduler 00:06:48.053 ************************************ 00:06:48.053 12:50:47 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:48.053 * Looking for test storage... 00:06:48.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.313 12:50:47 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.313 --rc genhtml_branch_coverage=1 00:06:48.313 --rc genhtml_function_coverage=1 00:06:48.313 --rc genhtml_legend=1 00:06:48.313 --rc geninfo_all_blocks=1 00:06:48.313 --rc geninfo_unexecuted_blocks=1 00:06:48.313 00:06:48.313 ' 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.313 --rc genhtml_branch_coverage=1 00:06:48.313 --rc genhtml_function_coverage=1 00:06:48.313 --rc 
genhtml_legend=1 00:06:48.313 --rc geninfo_all_blocks=1 00:06:48.313 --rc geninfo_unexecuted_blocks=1 00:06:48.313 00:06:48.313 ' 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.313 --rc genhtml_branch_coverage=1 00:06:48.313 --rc genhtml_function_coverage=1 00:06:48.313 --rc genhtml_legend=1 00:06:48.313 --rc geninfo_all_blocks=1 00:06:48.313 --rc geninfo_unexecuted_blocks=1 00:06:48.313 00:06:48.313 ' 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.313 --rc genhtml_branch_coverage=1 00:06:48.313 --rc genhtml_function_coverage=1 00:06:48.313 --rc genhtml_legend=1 00:06:48.313 --rc geninfo_all_blocks=1 00:06:48.313 --rc geninfo_unexecuted_blocks=1 00:06:48.313 00:06:48.313 ' 00:06:48.313 12:50:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:48.313 12:50:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1808571 00:06:48.313 12:50:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.313 12:50:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:48.313 12:50:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1808571 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1808571 ']' 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.313 12:50:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.313 [2024-11-29 12:50:48.011063] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:06:48.313 [2024-11-29 12:50:48.011115] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808571 ] 00:06:48.313 [2024-11-29 12:50:48.069092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.313 [2024-11-29 12:50:48.115819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.313 [2024-11-29 12:50:48.115908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.313 [2024-11-29 12:50:48.116013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.313 [2024-11-29 12:50:48.116024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:48.573 12:50:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.573 [2024-11-29 12:50:48.172567] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:48.573 [2024-11-29 12:50:48.172586] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:48.573 [2024-11-29 12:50:48.172595] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:48.573 [2024-11-29 12:50:48.172600] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:48.573 [2024-11-29 12:50:48.172605] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.573 12:50:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.573 [2024-11-29 12:50:48.248497] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.573 12:50:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.573 12:50:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.573 ************************************ 00:06:48.573 START TEST scheduler_create_thread 00:06:48.573 ************************************ 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.573 2 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.573 3 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.573 4 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.573 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 5 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.574 12:50:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 6 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 7 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 8 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.574 12:50:48 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 9 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 10 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.574 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.142 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.142 12:50:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:49.142 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.142 12:50:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.520 12:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.520 12:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:50.520 12:50:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:50.520 12:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.520 12:50:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.897 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.897 00:06:51.897 real 0m3.102s 00:06:51.897 user 0m0.020s 00:06:51.897 sys 0m0.009s 00:06:51.897 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.897 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.897 ************************************ 00:06:51.897 END TEST scheduler_create_thread 00:06:51.897 ************************************ 00:06:51.897 12:50:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:51.897 12:50:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1808571 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1808571 ']' 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1808571 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1808571 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1808571' 00:06:51.897 killing process with pid 1808571 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1808571 00:06:51.897 12:50:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1808571 00:06:52.157 [2024-11-29 12:50:51.767869] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:52.157 00:06:52.157 real 0m4.156s 00:06:52.157 user 0m6.692s 00:06:52.157 sys 0m0.358s 00:06:52.157 12:50:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.157 12:50:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:52.157 ************************************ 00:06:52.157 END TEST event_scheduler 00:06:52.157 ************************************ 00:06:52.416 12:50:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:52.416 12:50:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:52.416 12:50:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.416 12:50:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.416 12:50:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.416 ************************************ 00:06:52.416 START TEST app_repeat 00:06:52.416 ************************************ 00:06:52.416 12:50:52 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:52.416 12:50:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.416 12:50:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1809262 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1809262' 00:06:52.417 Process app_repeat pid: 1809262 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:52.417 spdk_app_start Round 0 00:06:52.417 12:50:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1809262 /var/tmp/spdk-nbd.sock 00:06:52.417 12:50:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1809262 ']' 00:06:52.417 12:50:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.417 12:50:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.417 12:50:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:52.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.417 12:50:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.417 12:50:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.417 [2024-11-29 12:50:52.045477] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:06:52.417 [2024-11-29 12:50:52.045524] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809262 ] 00:06:52.417 [2024-11-29 12:50:52.108585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.417 [2024-11-29 12:50:52.154316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.417 [2024-11-29 12:50:52.154321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.675 12:50:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.675 12:50:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:52.675 12:50:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.675 Malloc0 00:06:52.675 12:50:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.934 Malloc1 00:06:52.934 12:50:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.934 
12:50:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.934 12:50:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.193 /dev/nbd0 00:06:53.193 12:50:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.193 12:50:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:53.193 1+0 records in 00:06:53.193 1+0 records out 00:06:53.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195873 s, 20.9 MB/s 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.193 12:50:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.193 12:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.193 12:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.193 12:50:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:53.452 /dev/nbd1 00:06:53.452 12:50:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:53.452 12:50:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.452 12:50:53 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.452 1+0 records in 00:06:53.452 1+0 records out 00:06:53.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207842 s, 19.7 MB/s 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.452 12:50:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.452 12:50:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.452 12:50:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.452 12:50:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.452 12:50:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.452 12:50:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.711 12:50:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.711 { 00:06:53.711 "nbd_device": "/dev/nbd0", 00:06:53.711 "bdev_name": "Malloc0" 00:06:53.711 }, 00:06:53.711 { 00:06:53.711 "nbd_device": "/dev/nbd1", 00:06:53.711 "bdev_name": "Malloc1" 00:06:53.711 } 00:06:53.711 ]' 00:06:53.711 12:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.711 { 00:06:53.711 "nbd_device": "/dev/nbd0", 00:06:53.711 "bdev_name": "Malloc0" 00:06:53.711 
}, 00:06:53.711 { 00:06:53.711 "nbd_device": "/dev/nbd1", 00:06:53.711 "bdev_name": "Malloc1" 00:06:53.711 } 00:06:53.711 ]' 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.712 /dev/nbd1' 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.712 /dev/nbd1' 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.712 256+0 records in 00:06:53.712 256+0 records out 00:06:53.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106634 s, 98.3 MB/s 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.712 256+0 records in 00:06:53.712 256+0 records out 00:06:53.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141764 s, 74.0 MB/s 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.712 256+0 records in 00:06:53.712 256+0 records out 00:06:53.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147852 s, 70.9 MB/s 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:53.712 12:50:53 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.712 12:50:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.971 12:50:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.230 12:50:53 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.230 12:50:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.489 12:50:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.489 12:50:54 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.749 12:50:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.749 [2024-11-29 12:50:54.472545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.749 [2024-11-29 12:50:54.509550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.749 [2024-11-29 12:50:54.509553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.749 [2024-11-29 12:50:54.550729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.749 [2024-11-29 12:50:54.550774] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.149 12:50:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.149 12:50:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:58.149 spdk_app_start Round 1 00:06:58.149 12:50:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1809262 /var/tmp/spdk-nbd.sock 00:06:58.149 12:50:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1809262 ']' 00:06:58.149 12:50:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.149 12:50:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.149 12:50:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:58.149 12:50:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.149 12:50:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.149 12:50:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.149 12:50:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:58.149 12:50:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.149 Malloc0 00:06:58.149 12:50:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.149 Malloc1 00:06:58.149 12:50:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.149 12:50:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.440 /dev/nbd0 00:06:58.440 12:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.440 12:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.440 1+0 records in 00:06:58.440 1+0 records out 00:06:58.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181916 s, 22.5 MB/s 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.440 12:50:58 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.440 12:50:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.440 12:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.440 12:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.440 12:50:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.708 /dev/nbd1 00:06:58.709 12:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.709 12:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.709 1+0 records in 00:06:58.709 1+0 records out 00:06:58.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205183 s, 20.0 MB/s 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.709 12:50:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.709 12:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.709 12:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.709 12:50:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.709 12:50:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.709 12:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.975 { 00:06:58.975 "nbd_device": "/dev/nbd0", 00:06:58.975 "bdev_name": "Malloc0" 00:06:58.975 }, 00:06:58.975 { 00:06:58.975 "nbd_device": "/dev/nbd1", 00:06:58.975 "bdev_name": "Malloc1" 00:06:58.975 } 00:06:58.975 ]' 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.975 { 00:06:58.975 "nbd_device": "/dev/nbd0", 00:06:58.975 "bdev_name": "Malloc0" 00:06:58.975 }, 00:06:58.975 { 00:06:58.975 "nbd_device": "/dev/nbd1", 00:06:58.975 "bdev_name": "Malloc1" 00:06:58.975 } 00:06:58.975 ]' 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.975 /dev/nbd1' 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.975 /dev/nbd1' 00:06:58.975 
12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.975 256+0 records in 00:06:58.975 256+0 records out 00:06:58.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00336743 s, 311 MB/s 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.975 256+0 records in 00:06:58.975 256+0 records out 00:06:58.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136405 s, 76.9 MB/s 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.975 256+0 records in 00:06:58.975 256+0 records out 00:06:58.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149995 s, 69.9 MB/s 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.975 12:50:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.235 12:50:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.494 12:50:59 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.494 12:50:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.753 12:50:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.753 12:50:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.753 12:50:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:00.011 [2024-11-29 12:50:59.722428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.011 [2024-11-29 12:50:59.759358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.011 [2024-11-29 12:50:59.759361] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.011 [2024-11-29 12:50:59.800684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.011 [2024-11-29 12:50:59.800725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.298 12:51:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.298 12:51:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:03.298 spdk_app_start Round 2 00:07:03.298 12:51:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1809262 /var/tmp/spdk-nbd.sock 00:07:03.298 12:51:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1809262 ']' 00:07:03.298 12:51:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.298 12:51:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.298 12:51:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:03.298 12:51:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.298 12:51:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.298 12:51:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.298 12:51:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:03.298 12:51:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.298 Malloc0 00:07:03.298 12:51:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.557 Malloc1 00:07:03.557 12:51:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.557 12:51:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.557 /dev/nbd0 00:07:03.815 12:51:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.815 12:51:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.815 1+0 records in 00:07:03.815 1+0 records out 00:07:03.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0022446 s, 1.8 MB/s 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:03.815 12:51:03 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.815 12:51:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:03.816 12:51:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.816 12:51:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.816 12:51:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:03.816 /dev/nbd1 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.075 1+0 records in 00:07:04.075 1+0 records out 00:07:04.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241299 s, 17.0 MB/s 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.075 12:51:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.075 { 00:07:04.075 "nbd_device": "/dev/nbd0", 00:07:04.075 "bdev_name": "Malloc0" 00:07:04.075 }, 00:07:04.075 { 00:07:04.075 "nbd_device": "/dev/nbd1", 00:07:04.075 "bdev_name": "Malloc1" 00:07:04.075 } 00:07:04.075 ]' 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.075 { 00:07:04.075 "nbd_device": "/dev/nbd0", 00:07:04.075 "bdev_name": "Malloc0" 00:07:04.075 }, 00:07:04.075 { 00:07:04.075 "nbd_device": "/dev/nbd1", 00:07:04.075 "bdev_name": "Malloc1" 00:07:04.075 } 00:07:04.075 ]' 00:07:04.075 12:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.334 /dev/nbd1' 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.334 /dev/nbd1' 00:07:04.334 
12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.334 256+0 records in 00:07:04.334 256+0 records out 00:07:04.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010387 s, 101 MB/s 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.334 256+0 records in 00:07:04.334 256+0 records out 00:07:04.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143534 s, 73.1 MB/s 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.334 256+0 records in 00:07:04.334 256+0 records out 00:07:04.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152071 s, 69.0 MB/s 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.334 12:51:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.335 12:51:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.335 12:51:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.335 12:51:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.335 12:51:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:04.335 12:51:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.335 12:51:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.335 12:51:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.335 12:51:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.593 12:51:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.593 12:51:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.594 12:51:04 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:04.594 12:51:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.852 12:51:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.852 12:51:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.112 12:51:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.371 [2024-11-29 12:51:05.017178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.371 [2024-11-29 12:51:05.054688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.371 [2024-11-29 12:51:05.054691] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.371 [2024-11-29 12:51:05.096390] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.371 [2024-11-29 12:51:05.096430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:08.658 12:51:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1809262 /var/tmp/spdk-nbd.sock 00:07:08.658 12:51:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1809262 ']' 00:07:08.658 12:51:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.658 12:51:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.658 12:51:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:08.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
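The dd-then-cmp sequence traced above (write a 1 MiB random pattern to each `/dev/nbdX`, then `cmp` it back against the source file) is the core of `nbd_dd_data_verify`. A minimal standalone sketch of that shape, using ordinary temp files in place of the nbd block devices (so `oflag=direct` is dropped); the function body is a simplification, not the literal helper from `bdev/nbd_common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the write/verify pattern from the nbd_dd_data_verify trace:
# write one random 1 MiB pattern to every target, then cmp each target
# back against the pattern file.
# Targets are plain files here; the real test writes to /dev/nbd0, /dev/nbd1.
set -euo pipefail

nbd_dd_data_verify() {
    local tmp_file=$1 operation=$2
    shift 2
    local t
    if [[ $operation == write ]]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
        for t in "$@"; do
            # against a real block device the test also passes oflag=direct
            dd if="$tmp_file" of="$t" bs=4096 count=256 status=none
        done
    elif [[ $operation == verify ]]; then
        for t in "$@"; do
            cmp -b -n 1M "$tmp_file" "$t"   # byte-for-byte over the first 1 MiB
        done
        rm "$tmp_file"
    fi
}
```

A `cmp` mismatch exits nonzero and, under `set -e`, stops the run before the devices are detached, which is where the trace above would have failed.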
00:07:08.658 12:51:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.658 12:51:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:08.658 12:51:08 event.app_repeat -- event/event.sh@39 -- # killprocess 1809262 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1809262 ']' 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1809262 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1809262 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1809262' 00:07:08.658 killing process with pid 1809262 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1809262 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1809262 00:07:08.658 spdk_app_start is called in Round 0. 00:07:08.658 Shutdown signal received, stop current app iteration 00:07:08.658 Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 reinitialization... 00:07:08.658 spdk_app_start is called in Round 1. 00:07:08.658 Shutdown signal received, stop current app iteration 00:07:08.658 Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 reinitialization... 00:07:08.658 spdk_app_start is called in Round 2. 
00:07:08.658 Shutdown signal received, stop current app iteration 00:07:08.658 Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 reinitialization... 00:07:08.658 spdk_app_start is called in Round 3. 00:07:08.658 Shutdown signal received, stop current app iteration 00:07:08.658 12:51:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:08.658 12:51:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:08.658 00:07:08.658 real 0m16.225s 00:07:08.658 user 0m35.636s 00:07:08.658 sys 0m2.476s 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.658 12:51:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.658 ************************************ 00:07:08.658 END TEST app_repeat 00:07:08.658 ************************************ 00:07:08.658 12:51:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:08.658 12:51:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:08.658 12:51:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.658 12:51:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.658 12:51:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.658 ************************************ 00:07:08.658 START TEST cpu_locks 00:07:08.658 ************************************ 00:07:08.658 12:51:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:08.658 * Looking for test storage... 
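`killprocess`, whose trace appears above, follows a fixed shape: confirm the pid is still alive with `kill -0`, read the process name so the sudo wrapper is never signalled, then kill and reap. A condensed Linux-only sketch; the real helper in `autotest_common.sh` also handles signal selection and non-Linux `ps`:

```shell
#!/usr/bin/env bash
# Condensed sketch of the killprocess helper traced above:
# refuse to act on a dead pid or on the sudo wrapper, then kill and reap.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1           # process must still exist
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 in the log
    [ "$process_name" != sudo ] || return 1          # never signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap; a killed job exits nonzero
}
```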
00:07:08.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:08.658 12:51:08 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.658 12:51:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.658 12:51:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:08.659 12:51:08 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.659 12:51:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:08.918 12:51:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.918 12:51:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:08.918 12:51:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:08.918 12:51:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.918 12:51:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:08.918 12:51:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.918 12:51:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.918 12:51:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.918 12:51:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:08.918 12:51:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.918 12:51:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:08.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.918 --rc genhtml_branch_coverage=1 00:07:08.918 --rc genhtml_function_coverage=1 00:07:08.918 --rc genhtml_legend=1 00:07:08.918 --rc geninfo_all_blocks=1 00:07:08.918 --rc geninfo_unexecuted_blocks=1 00:07:08.918 00:07:08.918 ' 00:07:08.918 12:51:08 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:08.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.918 --rc genhtml_branch_coverage=1 00:07:08.918 --rc genhtml_function_coverage=1 00:07:08.918 --rc genhtml_legend=1 00:07:08.918 --rc geninfo_all_blocks=1 00:07:08.918 --rc geninfo_unexecuted_blocks=1 
00:07:08.918 00:07:08.918 ' 00:07:08.918 12:51:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:08.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.918 --rc genhtml_branch_coverage=1 00:07:08.918 --rc genhtml_function_coverage=1 00:07:08.918 --rc genhtml_legend=1 00:07:08.918 --rc geninfo_all_blocks=1 00:07:08.918 --rc geninfo_unexecuted_blocks=1 00:07:08.918 00:07:08.918 ' 00:07:08.918 12:51:08 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:08.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.918 --rc genhtml_branch_coverage=1 00:07:08.918 --rc genhtml_function_coverage=1 00:07:08.918 --rc genhtml_legend=1 00:07:08.918 --rc geninfo_all_blocks=1 00:07:08.918 --rc geninfo_unexecuted_blocks=1 00:07:08.918 00:07:08.918 ' 00:07:08.918 12:51:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:08.918 12:51:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:08.918 12:51:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:08.918 12:51:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:08.918 12:51:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.918 12:51:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.918 12:51:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.918 ************************************ 00:07:08.918 START TEST default_locks 00:07:08.918 ************************************ 00:07:08.918 12:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:08.918 12:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1812562 00:07:08.918 12:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1812562 00:07:08.918 12:51:08 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.918 12:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1812562 ']' 00:07:08.918 12:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.918 12:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.918 12:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.918 12:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.918 12:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.918 [2024-11-29 12:51:08.578610] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
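The `scripts/common.sh` fragment traced earlier (`lt 1.15 2` feeding `cmp_versions`) is a pure-bash dotted-version comparison: split both versions on the separators, walk the fields numerically, and fall through to the operator check when every field matches. A condensed sketch assuming purely numeric fields (the real script also tolerates non-numeric components, which this sketch does not, and supports more operators than `<`, `>`, `==`):

```shell
#!/usr/bin/env bash
# Condensed sketch of the cmp_versions logic traced from scripts/common.sh:
# split on ".", "-", ":"; treat missing fields as 0; compare field by field.
cmp_versions() {
    local op=$2 ver1 ver2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && { [[ $op == '>' ]]; return; }   # first differing field decides
        ((a < b)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]   # all fields equal
}

lt() { cmp_versions "$1" '<' "$2"; }   # "less than", as the lcov check uses it
```

Field-wise numeric comparison is what makes `lt 1.15 2` true even though `"1.15" < "2"` would also hold lexically; it equally makes `lt 1.9 1.15` true, which a string comparison would get wrong.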
00:07:08.918 [2024-11-29 12:51:08.578652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812562 ] 00:07:08.918 [2024-11-29 12:51:08.642370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.918 [2024-11-29 12:51:08.685309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.177 12:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.177 12:51:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:09.177 12:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1812562 00:07:09.177 12:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1812562 00:07:09.177 12:51:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.746 lslocks: write error 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1812562 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1812562 ']' 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1812562 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1812562 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1812562' 00:07:09.746 killing process with pid 1812562 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1812562 00:07:09.746 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1812562 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1812562 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1812562 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1812562 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1812562 ']' 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
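The recurring "Waiting for process to start up and listen on UNIX domain socket ..." line comes from `waitforlisten`, a bounded poll loop with `max_retries=100`. A simplified sketch of that loop; the probe below only tests that the socket path exists, whereas the real helper checks for an actual socket and confirms the server answers an RPC:

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforlisten poll loop traced above:
# poll until the RPC socket path appears, give up after max_retries,
# and bail out early if the target process dies while we wait.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
        [ -e "$rpc_addr" ] && return 0           # real helper tests -S and issues an RPC
        sleep 0.1
    done
    return 1
}
```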
00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1812562) - No such process 00:07:10.004 ERROR: process (pid: 1812562) is no longer running 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.004 00:07:10.004 real 0m1.240s 00:07:10.004 user 0m1.208s 00:07:10.004 sys 0m0.566s 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.004 12:51:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.004 ************************************ 00:07:10.004 END TEST default_locks 00:07:10.004 ************************************ 00:07:10.004 12:51:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:10.004 12:51:09 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.004 12:51:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.004 12:51:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.263 ************************************ 00:07:10.263 START TEST default_locks_via_rpc 00:07:10.263 ************************************ 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1813035 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1813035 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1813035 ']' 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.263 12:51:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.263 [2024-11-29 12:51:09.884095] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
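The `NOT` wrapper traced above inverts a command's exit status so that an expected failure (there, `waitforlisten` on a pid that no longer exists) counts as a pass. A minimal sketch of just the inversion; the real helper in `autotest_common.sh` additionally validates its argument with `valid_exec_arg` and special-cases exit codes above 128 (death by signal), which this sketch omits:

```shell
#!/usr/bin/env bash
# Minimal sketch of the NOT expected-failure wrapper traced above:
# run the wrapped command and succeed only if it failed.
NOT() {
    local es=0
    "$@" || es=$?
    # es == 0 (command succeeded) makes NOT fail;
    # any nonzero es (command failed as expected) makes NOT succeed
    ((!es == 0))
}
```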
00:07:10.263 [2024-11-29 12:51:09.884137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813035 ] 00:07:10.263 [2024-11-29 12:51:09.945226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.263 [2024-11-29 12:51:09.987810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.521 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.522 12:51:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1813035 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1813035 00:07:10.522 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.780 12:51:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1813035 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1813035 ']' 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1813035 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813035 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813035' 00:07:10.781 killing process with pid 1813035 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1813035 00:07:10.781 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1813035 00:07:11.355 00:07:11.355 real 0m1.060s 00:07:11.355 user 0m1.016s 00:07:11.355 sys 0m0.474s 00:07:11.355 12:51:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.355 12:51:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.355 ************************************ 00:07:11.355 END TEST default_locks_via_rpc 00:07:11.355 ************************************ 00:07:11.355 12:51:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:11.355 12:51:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.355 12:51:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.355 12:51:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.355 ************************************ 00:07:11.355 START TEST non_locking_app_on_locked_coremask 00:07:11.355 ************************************ 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1813163 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1813163 /var/tmp/spdk.sock 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1813163 ']' 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:11.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.355 12:51:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.355 [2024-11-29 12:51:10.994751] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:11.355 [2024-11-29 12:51:10.994791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813163 ] 00:07:11.355 [2024-11-29 12:51:11.056100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.355 [2024-11-29 12:51:11.099821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1813375 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1813375 /var/tmp/spdk2.sock 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1813375 ']' 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.615 12:51:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.615 [2024-11-29 12:51:11.361248] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:11.615 [2024-11-29 12:51:11.361297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813375 ] 00:07:11.873 [2024-11-29 12:51:11.445357] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
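The two-instance launch above only succeeds because the second `spdk_tgt` gets `--disable-cpumask-locks` ("CPU core locks deactivated."); otherwise both instances on the same core would contend for the per-core lock file, which is what the `locks_exist` checks probe with `lslocks -p PID | grep spdk_cpu_lock`. Those core locks behave like flock-style file locks; a hedged sketch of claiming and probing one with `flock(1)` (the lock file name below is illustrative, not the real per-core path):

```shell
#!/usr/bin/env bash
# Hedged sketch of flock-style per-core lock claiming, the mechanism the
# locks_exist/lslocks checks above are probing for. The lock file name
# is made up for illustration.
claim_core_lock() {
    local lockfile=$1
    exec 9>"$lockfile"   # keep fd 9 open for the life of this process
    flock -n 9           # non-blocking: fail fast if another holder exists
}

core_lock_held_elsewhere() {
    local lockfile=$1
    # try to grab the lock via a throwaway flock(1) child;
    # if that attempt is denied, some other descriptor holds it
    ! flock -n "$lockfile" true
}
```

Holding the lock on a long-lived file descriptor (fd 9 above) is what makes the lock visible in `lslocks` for as long as the process runs, and releasing it is as simple as the process exiting.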
00:07:11.873 [2024-11-29 12:51:11.445379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.873 [2024-11-29 12:51:11.533972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.439 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.439 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:12.439 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1813163 00:07:12.439 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1813163 00:07:12.439 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.007 lslocks: write error 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1813163 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1813163 ']' 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1813163 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813163 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1813163' 00:07:13.007 killing process with pid 1813163 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1813163 00:07:13.007 12:51:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1813163 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1813375 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1813375 ']' 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1813375 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813375 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813375' 00:07:13.573 killing process with pid 1813375 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1813375 00:07:13.573 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1813375 00:07:13.832 00:07:13.832 real 0m2.605s 00:07:13.832 user 0m2.747s 00:07:13.832 sys 0m0.875s 00:07:13.832 12:51:13 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.832 12:51:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.832 ************************************ 00:07:13.832 END TEST non_locking_app_on_locked_coremask 00:07:13.832 ************************************ 00:07:13.832 12:51:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:13.832 12:51:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.832 12:51:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.832 12:51:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.832 ************************************ 00:07:13.832 START TEST locking_app_on_unlocked_coremask 00:07:13.832 ************************************ 00:07:13.832 12:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:13.832 12:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1813652 00:07:13.832 12:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1813652 /var/tmp/spdk.sock 00:07:13.832 12:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:13.832 12:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1813652 ']' 00:07:13.832 12:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.832 12:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.832 12:51:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.832 12:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.832 12:51:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.091 [2024-11-29 12:51:13.687753] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:14.092 [2024-11-29 12:51:13.687799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813652 ] 00:07:14.092 [2024-11-29 12:51:13.751942] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:14.092 [2024-11-29 12:51:13.751974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.092 [2024-11-29 12:51:13.795246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1813841 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1813841 /var/tmp/spdk2.sock 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1813841 ']' 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.350 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.350 [2024-11-29 12:51:14.058453] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:07:14.350 [2024-11-29 12:51:14.058501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813841 ] 00:07:14.350 [2024-11-29 12:51:14.146787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.610 [2024-11-29 12:51:14.233782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.177 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.177 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:15.177 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1813841 00:07:15.177 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1813841 00:07:15.177 12:51:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.743 lslocks: write error 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1813652 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1813652 ']' 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1813652 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813652 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813652' 00:07:15.743 killing process with pid 1813652 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1813652 00:07:15.743 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1813652 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1813841 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1813841 ']' 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1813841 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813841 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813841' 00:07:16.315 killing process with pid 1813841 00:07:16.315 12:51:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1813841 00:07:16.315 12:51:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1813841 00:07:16.574 00:07:16.574 real 0m2.655s 00:07:16.574 user 0m2.803s 00:07:16.574 sys 0m0.873s 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.574 ************************************ 00:07:16.574 END TEST locking_app_on_unlocked_coremask 00:07:16.574 ************************************ 00:07:16.574 12:51:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:16.574 12:51:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.574 12:51:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.574 12:51:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.574 ************************************ 00:07:16.574 START TEST locking_app_on_locked_coremask 00:07:16.574 ************************************ 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1814151 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1814151 /var/tmp/spdk.sock 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1814151 ']' 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.574 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.832 [2024-11-29 12:51:16.406190] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:16.832 [2024-11-29 12:51:16.406234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814151 ] 00:07:16.832 [2024-11-29 12:51:16.466698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.832 [2024-11-29 12:51:16.509387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1814313 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1814313 /var/tmp/spdk2.sock 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1814313 /var/tmp/spdk2.sock 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1814313 /var/tmp/spdk2.sock 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1814313 ']' 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.091 12:51:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.091 [2024-11-29 12:51:16.774815] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:17.091 [2024-11-29 12:51:16.774865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814313 ] 00:07:17.091 [2024-11-29 12:51:16.865628] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1814151 has claimed it. 00:07:17.091 [2024-11-29 12:51:16.865669] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1814313) - No such process 00:07:17.658 ERROR: process (pid: 1814313) is no longer running 00:07:17.658 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.658 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:17.658 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:17.658 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.658 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.658 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.658 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1814151 00:07:17.658 12:51:17 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1814151 00:07:17.658 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.226 lslocks: write error 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1814151 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1814151 ']' 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1814151 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814151 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1814151' 00:07:18.226 killing process with pid 1814151 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1814151 00:07:18.226 12:51:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1814151 00:07:18.485 00:07:18.485 real 0m1.838s 00:07:18.485 user 0m1.993s 00:07:18.485 sys 0m0.616s 00:07:18.485 12:51:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.485 12:51:18 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.485 ************************************ 00:07:18.485 END TEST locking_app_on_locked_coremask 00:07:18.485 ************************************ 00:07:18.485 12:51:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:18.485 12:51:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.485 12:51:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.485 12:51:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.485 ************************************ 00:07:18.485 START TEST locking_overlapped_coremask 00:07:18.485 ************************************ 00:07:18.485 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:18.485 12:51:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1814633 00:07:18.485 12:51:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1814633 /var/tmp/spdk.sock 00:07:18.485 12:51:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:18.485 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1814633 ']' 00:07:18.485 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.485 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.486 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.486 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.486 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.745 [2024-11-29 12:51:18.319034] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:18.745 [2024-11-29 12:51:18.319079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814633 ] 00:07:18.745 [2024-11-29 12:51:18.380254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.745 [2024-11-29 12:51:18.420658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.745 [2024-11-29 12:51:18.420756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.745 [2024-11-29 12:51:18.420757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1814641 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1814641 /var/tmp/spdk2.sock 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1814641 /var/tmp/spdk2.sock 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1814641 /var/tmp/spdk2.sock 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1814641 ']' 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.004 12:51:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.004 [2024-11-29 12:51:18.684616] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:07:19.004 [2024-11-29 12:51:18.684665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814641 ] 00:07:19.004 [2024-11-29 12:51:18.776439] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1814633 has claimed it. 00:07:19.005 [2024-11-29 12:51:18.776479] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1814641) - No such process 00:07:19.572 ERROR: process (pid: 1814641) is no longer running 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1814633 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1814633 ']' 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1814633 00:07:19.572 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:19.573 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.573 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814633 00:07:19.573 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.573 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.573 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1814633' 00:07:19.573 killing process with pid 1814633 00:07:19.573 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1814633 00:07:19.573 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1814633 00:07:20.141 00:07:20.141 real 0m1.426s 00:07:20.141 user 0m3.937s 00:07:20.141 sys 0m0.399s 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.141 
************************************ 00:07:20.141 END TEST locking_overlapped_coremask 00:07:20.141 ************************************ 00:07:20.141 12:51:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:20.141 12:51:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.141 12:51:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.141 12:51:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.141 ************************************ 00:07:20.141 START TEST locking_overlapped_coremask_via_rpc 00:07:20.141 ************************************ 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1814900 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1814900 /var/tmp/spdk.sock 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1814900 ']' 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:20.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.141 12:51:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.141 [2024-11-29 12:51:19.812083] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:20.141 [2024-11-29 12:51:19.812126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814900 ] 00:07:20.141 [2024-11-29 12:51:19.874246] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:20.141 [2024-11-29 12:51:19.874269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.141 [2024-11-29 12:51:19.919348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.141 [2024-11-29 12:51:19.919442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.141 [2024-11-29 12:51:19.919444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1814909 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1814909 /var/tmp/spdk2.sock 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1814909 ']' 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.400 12:51:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.400 [2024-11-29 12:51:20.193082] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:20.400 [2024-11-29 12:51:20.193133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814909 ] 00:07:20.659 [2024-11-29 12:51:20.287129] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.659 [2024-11-29 12:51:20.287158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.659 [2024-11-29 12:51:20.382041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.659 [2024-11-29 12:51:20.382153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.659 [2024-11-29 12:51:20.382154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.227 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.227 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.227 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.227 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.227 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.485 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.485 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.485 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.486 12:51:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.486 [2024-11-29 12:51:21.064032] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1814900 has claimed it. 00:07:21.486 request: 00:07:21.486 { 00:07:21.486 "method": "framework_enable_cpumask_locks", 00:07:21.486 "req_id": 1 00:07:21.486 } 00:07:21.486 Got JSON-RPC error response 00:07:21.486 response: 00:07:21.486 { 00:07:21.486 "code": -32603, 00:07:21.486 "message": "Failed to claim CPU core: 2" 00:07:21.486 } 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1814900 /var/tmp/spdk.sock 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1814900 ']' 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1814909 /var/tmp/spdk2.sock 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1814909 ']' 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.486 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.745 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.745 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:21.745 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:21.745 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.745 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.745 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.745 00:07:21.745 real 0m1.711s 00:07:21.745 user 0m0.833s 00:07:21.745 sys 0m0.141s 00:07:21.745 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.745 12:51:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.745 ************************************ 00:07:21.745 END TEST locking_overlapped_coremask_via_rpc 00:07:21.745 ************************************ 00:07:21.745 12:51:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:21.745 12:51:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1814900 ]] 00:07:21.745 12:51:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1814900 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1814900 ']' 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1814900 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814900 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1814900' 00:07:21.745 killing process with pid 1814900 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1814900 00:07:21.745 12:51:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1814900 00:07:22.313 12:51:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1814909 ]] 00:07:22.313 12:51:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1814909 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1814909 ']' 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1814909 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814909 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1814909' 00:07:22.313 killing process with pid 1814909 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1814909 00:07:22.313 12:51:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1814909 00:07:22.572 12:51:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.572 12:51:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:22.572 12:51:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1814900 ]] 00:07:22.572 12:51:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1814900 00:07:22.572 12:51:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1814900 ']' 00:07:22.572 12:51:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1814900 00:07:22.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1814900) - No such process 00:07:22.572 12:51:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1814900 is not found' 00:07:22.572 Process with pid 1814900 is not found 00:07:22.572 12:51:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1814909 ]] 00:07:22.572 12:51:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1814909 00:07:22.572 12:51:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1814909 ']' 00:07:22.572 12:51:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1814909 00:07:22.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1814909) - No such process 00:07:22.572 12:51:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1814909 is not found' 00:07:22.572 Process with pid 1814909 is not found 00:07:22.572 12:51:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.572 00:07:22.572 real 0m13.929s 00:07:22.572 user 0m24.316s 00:07:22.572 sys 0m4.912s 00:07:22.572 12:51:22 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.572 
12:51:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.572 ************************************ 00:07:22.572 END TEST cpu_locks 00:07:22.572 ************************************ 00:07:22.572 00:07:22.572 real 0m38.401s 00:07:22.572 user 1m13.224s 00:07:22.572 sys 0m8.283s 00:07:22.572 12:51:22 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.572 12:51:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.572 ************************************ 00:07:22.572 END TEST event 00:07:22.572 ************************************ 00:07:22.572 12:51:22 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.572 12:51:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.572 12:51:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.572 12:51:22 -- common/autotest_common.sh@10 -- # set +x 00:07:22.572 ************************************ 00:07:22.572 START TEST thread 00:07:22.572 ************************************ 00:07:22.572 12:51:22 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.831 * Looking for test storage... 
00:07:22.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.831 12:51:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.831 12:51:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.831 12:51:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.831 12:51:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.831 12:51:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.831 12:51:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.831 12:51:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.831 12:51:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.831 12:51:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.831 12:51:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.831 12:51:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.831 12:51:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:22.831 12:51:22 thread -- scripts/common.sh@345 -- # : 1 00:07:22.831 12:51:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.831 12:51:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.831 12:51:22 thread -- scripts/common.sh@365 -- # decimal 1 00:07:22.831 12:51:22 thread -- scripts/common.sh@353 -- # local d=1 00:07:22.831 12:51:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.831 12:51:22 thread -- scripts/common.sh@355 -- # echo 1 00:07:22.831 12:51:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.831 12:51:22 thread -- scripts/common.sh@366 -- # decimal 2 00:07:22.831 12:51:22 thread -- scripts/common.sh@353 -- # local d=2 00:07:22.831 12:51:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.831 12:51:22 thread -- scripts/common.sh@355 -- # echo 2 00:07:22.831 12:51:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.831 12:51:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.831 12:51:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.831 12:51:22 thread -- scripts/common.sh@368 -- # return 0 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.831 --rc genhtml_branch_coverage=1 00:07:22.831 --rc genhtml_function_coverage=1 00:07:22.831 --rc genhtml_legend=1 00:07:22.831 --rc geninfo_all_blocks=1 00:07:22.831 --rc geninfo_unexecuted_blocks=1 00:07:22.831 00:07:22.831 ' 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.831 --rc genhtml_branch_coverage=1 00:07:22.831 --rc genhtml_function_coverage=1 00:07:22.831 --rc genhtml_legend=1 00:07:22.831 --rc geninfo_all_blocks=1 00:07:22.831 --rc geninfo_unexecuted_blocks=1 00:07:22.831 00:07:22.831 ' 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.831 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.831 --rc genhtml_branch_coverage=1 00:07:22.831 --rc genhtml_function_coverage=1 00:07:22.831 --rc genhtml_legend=1 00:07:22.831 --rc geninfo_all_blocks=1 00:07:22.831 --rc geninfo_unexecuted_blocks=1 00:07:22.831 00:07:22.831 ' 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.831 --rc genhtml_branch_coverage=1 00:07:22.831 --rc genhtml_function_coverage=1 00:07:22.831 --rc genhtml_legend=1 00:07:22.831 --rc geninfo_all_blocks=1 00:07:22.831 --rc geninfo_unexecuted_blocks=1 00:07:22.831 00:07:22.831 ' 00:07:22.831 12:51:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.831 12:51:22 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.831 ************************************ 00:07:22.831 START TEST thread_poller_perf 00:07:22.831 ************************************ 00:07:22.831 12:51:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.831 [2024-11-29 12:51:22.569223] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:07:22.831 [2024-11-29 12:51:22.569293] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815474 ] 00:07:22.831 [2024-11-29 12:51:22.635307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.090 [2024-11-29 12:51:22.676802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.090 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:24.026 [2024-11-29T11:51:23.846Z] ====================================== 00:07:24.026 [2024-11-29T11:51:23.846Z] busy:2306242500 (cyc) 00:07:24.026 [2024-11-29T11:51:23.846Z] total_run_count: 409000 00:07:24.026 [2024-11-29T11:51:23.846Z] tsc_hz: 2300000000 (cyc) 00:07:24.026 [2024-11-29T11:51:23.846Z] ====================================== 00:07:24.026 [2024-11-29T11:51:23.846Z] poller_cost: 5638 (cyc), 2451 (nsec) 00:07:24.026 00:07:24.026 real 0m1.172s 00:07:24.026 user 0m1.108s 00:07:24.026 sys 0m0.060s 00:07:24.026 12:51:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.026 12:51:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.026 ************************************ 00:07:24.026 END TEST thread_poller_perf 00:07:24.026 ************************************ 00:07:24.026 12:51:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.026 12:51:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:24.026 12:51:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.026 12:51:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.026 ************************************ 00:07:24.026 START TEST thread_poller_perf 00:07:24.026 
************************************ 00:07:24.026 12:51:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.026 [2024-11-29 12:51:23.809185] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:24.026 [2024-11-29 12:51:23.809242] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815720 ] 00:07:24.284 [2024-11-29 12:51:23.872517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.284 [2024-11-29 12:51:23.912677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.284 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:25.217 [2024-11-29T11:51:25.037Z] ====================================== 00:07:25.217 [2024-11-29T11:51:25.037Z] busy:2301625292 (cyc) 00:07:25.217 [2024-11-29T11:51:25.037Z] total_run_count: 5395000 00:07:25.217 [2024-11-29T11:51:25.037Z] tsc_hz: 2300000000 (cyc) 00:07:25.217 [2024-11-29T11:51:25.037Z] ====================================== 00:07:25.217 [2024-11-29T11:51:25.037Z] poller_cost: 426 (cyc), 185 (nsec) 00:07:25.217 00:07:25.217 real 0m1.163s 00:07:25.217 user 0m1.090s 00:07:25.217 sys 0m0.069s 00:07:25.217 12:51:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.217 12:51:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.217 ************************************ 00:07:25.217 END TEST thread_poller_perf 00:07:25.217 ************************************ 00:07:25.217 12:51:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:25.217 00:07:25.217 real 0m2.642s 00:07:25.217 user 0m2.366s 00:07:25.217 sys 0m0.290s 00:07:25.217 12:51:24 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.217 12:51:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.217 ************************************ 00:07:25.217 END TEST thread 00:07:25.217 ************************************ 00:07:25.217 12:51:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:25.217 12:51:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:25.217 12:51:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.217 12:51:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.217 12:51:25 -- common/autotest_common.sh@10 -- # set +x 00:07:25.475 ************************************ 00:07:25.475 START TEST app_cmdline 00:07:25.475 ************************************ 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:25.475 * Looking for test storage... 00:07:25.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.475 12:51:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 
00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 12:51:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:25.475 12:51:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1816019 00:07:25.475 12:51:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1816019 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1816019 ']' 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.475 12:51:25 app_cmdline -- app/cmdline.sh@16 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.475 12:51:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.475 [2024-11-29 12:51:25.246779] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:25.475 [2024-11-29 12:51:25.246831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816019 ] 00:07:25.733 [2024-11-29 12:51:25.307959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.733 [2024-11-29 12:51:25.350767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:25.990 { 00:07:25.990 "version": "SPDK v25.01-pre git sha1 0b658ecad", 00:07:25.990 "fields": { 00:07:25.990 "major": 25, 00:07:25.990 "minor": 1, 00:07:25.990 "patch": 0, 00:07:25.990 "suffix": "-pre", 00:07:25.990 "commit": "0b658ecad" 00:07:25.990 } 00:07:25.990 } 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:25.990 12:51:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:25.990 12:51:25 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:26.248 request: 00:07:26.248 { 00:07:26.248 "method": "env_dpdk_get_mem_stats", 00:07:26.248 "req_id": 1 00:07:26.248 } 00:07:26.248 Got JSON-RPC error response 00:07:26.248 response: 00:07:26.248 { 00:07:26.248 "code": -32601, 00:07:26.248 "message": "Method not found" 00:07:26.248 } 00:07:26.248 12:51:25 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:26.248 12:51:25 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.248 12:51:25 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.248 12:51:25 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.248 12:51:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1816019 00:07:26.248 12:51:25 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1816019 ']' 00:07:26.248 12:51:25 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1816019 00:07:26.248 12:51:25 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:26.248 12:51:25 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.248 12:51:25 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816019 00:07:26.248 12:51:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.248 12:51:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.248 12:51:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1816019' 00:07:26.248 killing process with pid 1816019 00:07:26.248 12:51:26 
app_cmdline -- common/autotest_common.sh@973 -- # kill 1816019 00:07:26.248 12:51:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 1816019 00:07:26.812 00:07:26.812 real 0m1.281s 00:07:26.812 user 0m1.505s 00:07:26.812 sys 0m0.424s 00:07:26.812 12:51:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.812 12:51:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:26.812 ************************************ 00:07:26.812 END TEST app_cmdline 00:07:26.812 ************************************ 00:07:26.812 12:51:26 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:26.812 12:51:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.812 12:51:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.812 12:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:26.812 ************************************ 00:07:26.812 START TEST version 00:07:26.812 ************************************ 00:07:26.812 12:51:26 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:26.812 * Looking for test storage... 
00:07:26.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:26.812 12:51:26 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.812 12:51:26 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.812 12:51:26 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.812 12:51:26 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.812 12:51:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.812 12:51:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.813 12:51:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.813 12:51:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.813 12:51:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.813 12:51:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.813 12:51:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.813 12:51:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.813 12:51:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.813 12:51:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.813 12:51:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.813 12:51:26 version -- scripts/common.sh@344 -- # case "$op" in 00:07:26.813 12:51:26 version -- scripts/common.sh@345 -- # : 1 00:07:26.813 12:51:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.813 12:51:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.813 12:51:26 version -- scripts/common.sh@365 -- # decimal 1 00:07:26.813 12:51:26 version -- scripts/common.sh@353 -- # local d=1 00:07:26.813 12:51:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.813 12:51:26 version -- scripts/common.sh@355 -- # echo 1 00:07:26.813 12:51:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.813 12:51:26 version -- scripts/common.sh@366 -- # decimal 2 00:07:26.813 12:51:26 version -- scripts/common.sh@353 -- # local d=2 00:07:26.813 12:51:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.813 12:51:26 version -- scripts/common.sh@355 -- # echo 2 00:07:26.813 12:51:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.813 12:51:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.813 12:51:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.813 12:51:26 version -- scripts/common.sh@368 -- # return 0 00:07:26.813 12:51:26 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.813 12:51:26 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.813 --rc genhtml_branch_coverage=1 00:07:26.813 --rc genhtml_function_coverage=1 00:07:26.813 --rc genhtml_legend=1 00:07:26.813 --rc geninfo_all_blocks=1 00:07:26.813 --rc geninfo_unexecuted_blocks=1 00:07:26.813 00:07:26.813 ' 00:07:26.813 12:51:26 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.813 --rc genhtml_branch_coverage=1 00:07:26.813 --rc genhtml_function_coverage=1 00:07:26.813 --rc genhtml_legend=1 00:07:26.813 --rc geninfo_all_blocks=1 00:07:26.813 --rc geninfo_unexecuted_blocks=1 00:07:26.813 00:07:26.813 ' 00:07:26.813 12:51:26 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.813 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.813 --rc genhtml_branch_coverage=1 00:07:26.813 --rc genhtml_function_coverage=1 00:07:26.813 --rc genhtml_legend=1 00:07:26.813 --rc geninfo_all_blocks=1 00:07:26.813 --rc geninfo_unexecuted_blocks=1 00:07:26.813 00:07:26.813 ' 00:07:26.813 12:51:26 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.813 --rc genhtml_branch_coverage=1 00:07:26.813 --rc genhtml_function_coverage=1 00:07:26.813 --rc genhtml_legend=1 00:07:26.813 --rc geninfo_all_blocks=1 00:07:26.813 --rc geninfo_unexecuted_blocks=1 00:07:26.813 00:07:26.813 ' 00:07:26.813 12:51:26 version -- app/version.sh@17 -- # get_header_version major 00:07:26.813 12:51:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:26.813 12:51:26 version -- app/version.sh@14 -- # cut -f2 00:07:26.813 12:51:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:26.813 12:51:26 version -- app/version.sh@17 -- # major=25 00:07:26.813 12:51:26 version -- app/version.sh@18 -- # get_header_version minor 00:07:26.813 12:51:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:26.813 12:51:26 version -- app/version.sh@14 -- # cut -f2 00:07:26.813 12:51:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:26.813 12:51:26 version -- app/version.sh@18 -- # minor=1 00:07:26.813 12:51:26 version -- app/version.sh@19 -- # get_header_version patch 00:07:26.813 12:51:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:26.813 12:51:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:26.813 12:51:26 version -- app/version.sh@14 -- # cut -f2 00:07:26.813 
12:51:26 version -- app/version.sh@19 -- # patch=0 00:07:26.813 12:51:26 version -- app/version.sh@20 -- # get_header_version suffix 00:07:26.813 12:51:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:26.813 12:51:26 version -- app/version.sh@14 -- # cut -f2 00:07:26.813 12:51:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:26.813 12:51:26 version -- app/version.sh@20 -- # suffix=-pre 00:07:26.813 12:51:26 version -- app/version.sh@22 -- # version=25.1 00:07:26.813 12:51:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:26.813 12:51:26 version -- app/version.sh@28 -- # version=25.1rc0 00:07:26.813 12:51:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:26.813 12:51:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:26.813 12:51:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:26.813 12:51:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:26.813 00:07:26.813 real 0m0.235s 00:07:26.813 user 0m0.149s 00:07:26.813 sys 0m0.125s 00:07:26.813 12:51:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.813 12:51:26 version -- common/autotest_common.sh@10 -- # set +x 00:07:26.813 ************************************ 00:07:26.813 END TEST version 00:07:26.813 ************************************ 00:07:27.071 12:51:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:27.071 12:51:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:27.071 12:51:26 -- spdk/autotest.sh@194 -- # uname -s 00:07:27.071 12:51:26 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:27.071 12:51:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:27.071 12:51:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:27.071 12:51:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:27.071 12:51:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:27.071 12:51:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:27.071 12:51:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.071 12:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:27.071 12:51:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:27.071 12:51:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:27.071 12:51:26 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:27.071 12:51:26 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:27.071 12:51:26 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:27.071 12:51:26 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:27.071 12:51:26 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:27.071 12:51:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.071 12:51:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.071 12:51:26 -- common/autotest_common.sh@10 -- # set +x 00:07:27.071 ************************************ 00:07:27.071 START TEST nvmf_tcp 00:07:27.071 ************************************ 00:07:27.071 12:51:26 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:27.071 * Looking for test storage... 
00:07:27.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:27.071 12:51:26 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.071 12:51:26 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.071 12:51:26 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.071 12:51:26 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.071 12:51:26 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.071 12:51:26 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.071 12:51:26 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.071 12:51:26 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.071 12:51:26 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.071 12:51:26 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.071 12:51:26 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.071 12:51:26 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.329 12:51:26 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:27.329 12:51:26 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.329 12:51:26 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.329 --rc genhtml_branch_coverage=1 00:07:27.329 --rc genhtml_function_coverage=1 00:07:27.329 --rc genhtml_legend=1 00:07:27.329 --rc geninfo_all_blocks=1 00:07:27.329 --rc geninfo_unexecuted_blocks=1 00:07:27.329 00:07:27.329 ' 00:07:27.329 12:51:26 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.329 --rc genhtml_branch_coverage=1 00:07:27.329 --rc genhtml_function_coverage=1 00:07:27.329 --rc genhtml_legend=1 00:07:27.329 --rc geninfo_all_blocks=1 00:07:27.329 --rc geninfo_unexecuted_blocks=1 00:07:27.329 00:07:27.329 ' 00:07:27.329 12:51:26 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:27.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.329 --rc genhtml_branch_coverage=1 00:07:27.329 --rc genhtml_function_coverage=1 00:07:27.329 --rc genhtml_legend=1 00:07:27.329 --rc geninfo_all_blocks=1 00:07:27.329 --rc geninfo_unexecuted_blocks=1 00:07:27.329 00:07:27.329 ' 00:07:27.329 12:51:26 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.329 --rc genhtml_branch_coverage=1 00:07:27.329 --rc genhtml_function_coverage=1 00:07:27.329 --rc genhtml_legend=1 00:07:27.329 --rc geninfo_all_blocks=1 00:07:27.329 --rc geninfo_unexecuted_blocks=1 00:07:27.329 00:07:27.329 ' 00:07:27.329 12:51:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:27.329 12:51:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:27.329 12:51:26 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:27.329 12:51:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.329 12:51:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.329 12:51:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.329 ************************************ 00:07:27.329 START TEST nvmf_target_core 00:07:27.329 ************************************ 00:07:27.329 12:51:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:27.329 * Looking for test storage... 
00:07:27.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.330 --rc genhtml_branch_coverage=1 00:07:27.330 --rc genhtml_function_coverage=1 00:07:27.330 --rc genhtml_legend=1 00:07:27.330 --rc geninfo_all_blocks=1 00:07:27.330 --rc geninfo_unexecuted_blocks=1 00:07:27.330 00:07:27.330 ' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.330 --rc genhtml_branch_coverage=1 
00:07:27.330 --rc genhtml_function_coverage=1 00:07:27.330 --rc genhtml_legend=1 00:07:27.330 --rc geninfo_all_blocks=1 00:07:27.330 --rc geninfo_unexecuted_blocks=1 00:07:27.330 00:07:27.330 ' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.330 --rc genhtml_branch_coverage=1 00:07:27.330 --rc genhtml_function_coverage=1 00:07:27.330 --rc genhtml_legend=1 00:07:27.330 --rc geninfo_all_blocks=1 00:07:27.330 --rc geninfo_unexecuted_blocks=1 00:07:27.330 00:07:27.330 ' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.330 --rc genhtml_branch_coverage=1 00:07:27.330 --rc genhtml_function_coverage=1 00:07:27.330 --rc genhtml_legend=1 00:07:27.330 --rc geninfo_all_blocks=1 00:07:27.330 --rc geninfo_unexecuted_blocks=1 00:07:27.330 00:07:27.330 ' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.330 12:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:27.588 ************************************ 00:07:27.588 START TEST nvmf_abort 00:07:27.588 ************************************ 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:27.588 * Looking for test storage... 
00:07:27.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.588 
12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.588 --rc genhtml_branch_coverage=1 00:07:27.588 --rc genhtml_function_coverage=1 00:07:27.588 --rc genhtml_legend=1 00:07:27.588 --rc geninfo_all_blocks=1 00:07:27.588 --rc 
geninfo_unexecuted_blocks=1 00:07:27.588 00:07:27.588 ' 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.588 --rc genhtml_branch_coverage=1 00:07:27.588 --rc genhtml_function_coverage=1 00:07:27.588 --rc genhtml_legend=1 00:07:27.588 --rc geninfo_all_blocks=1 00:07:27.588 --rc geninfo_unexecuted_blocks=1 00:07:27.588 00:07:27.588 ' 00:07:27.588 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.588 --rc genhtml_branch_coverage=1 00:07:27.588 --rc genhtml_function_coverage=1 00:07:27.588 --rc genhtml_legend=1 00:07:27.588 --rc geninfo_all_blocks=1 00:07:27.588 --rc geninfo_unexecuted_blocks=1 00:07:27.588 00:07:27.588 ' 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.589 --rc genhtml_branch_coverage=1 00:07:27.589 --rc genhtml_function_coverage=1 00:07:27.589 --rc genhtml_legend=1 00:07:27.589 --rc geninfo_all_blocks=1 00:07:27.589 --rc geninfo_unexecuted_blocks=1 00:07:27.589 00:07:27.589 ' 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.589 12:51:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:27.589 12:51:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.148 12:51:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:34.148 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.148 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:34.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.149 12:51:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:34.149 Found net devices under 0000:86:00.0: cvl_0_0 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:07:34.149 Found net devices under 0000:86:00.1: cvl_0_1 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.149 12:51:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:34.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:07:34.149 00:07:34.149 --- 10.0.0.2 ping statistics --- 00:07:34.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.149 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:07:34.149 00:07:34.149 --- 10.0.0.1 ping statistics --- 00:07:34.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.149 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1819596 00:07:34.149 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1819596 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1819596 ']' 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.150 [2024-11-29 12:51:33.242305] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:07:34.150 [2024-11-29 12:51:33.242351] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.150 [2024-11-29 12:51:33.309853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.150 [2024-11-29 12:51:33.351719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.150 [2024-11-29 12:51:33.351759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.150 [2024-11-29 12:51:33.351766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.150 [2024-11-29 12:51:33.351773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.150 [2024-11-29 12:51:33.351778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:34.150 [2024-11-29 12:51:33.353108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.150 [2024-11-29 12:51:33.353193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.150 [2024-11-29 12:51:33.353195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.150 [2024-11-29 12:51:33.502731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.150 Malloc0 00:07:34.150 12:51:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.150 Delay0 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.150 [2024-11-29 12:51:33.582019] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.150 12:51:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:34.150 [2024-11-29 12:51:33.750074] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:36.060 Initializing NVMe Controllers 00:07:36.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:36.060 controller IO queue size 128 less than required 00:07:36.060 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:36.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:36.060 Initialization complete. Launching workers. 
00:07:36.060 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36096 00:07:36.060 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36157, failed to submit 62 00:07:36.060 success 36100, unsuccessful 57, failed 0 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:36.060 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:36.319 rmmod nvme_tcp 00:07:36.319 rmmod nvme_fabrics 00:07:36.319 rmmod nvme_keyring 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:36.319 12:51:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1819596 ']' 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1819596 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1819596 ']' 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1819596 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1819596 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1819596' 00:07:36.319 killing process with pid 1819596 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1819596 00:07:36.319 12:51:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1819596 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# iptables-restore 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.579 12:51:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.484 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:38.484 00:07:38.484 real 0m11.099s 00:07:38.484 user 0m11.816s 00:07:38.484 sys 0m5.392s 00:07:38.484 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.484 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.484 ************************************ 00:07:38.484 END TEST nvmf_abort 00:07:38.484 ************************************ 00:07:38.484 12:51:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:38.484 12:51:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.484 12:51:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.484 12:51:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.743 ************************************ 00:07:38.743 START TEST nvmf_ns_hotplug_stress 00:07:38.743 ************************************ 00:07:38.743 12:51:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:38.743 * Looking for test storage... 00:07:38.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.743 
12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.743 12:51:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.743 --rc genhtml_branch_coverage=1 00:07:38.743 --rc genhtml_function_coverage=1 00:07:38.743 --rc genhtml_legend=1 00:07:38.743 --rc geninfo_all_blocks=1 00:07:38.743 --rc geninfo_unexecuted_blocks=1 00:07:38.743 00:07:38.743 ' 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.743 --rc genhtml_branch_coverage=1 00:07:38.743 --rc genhtml_function_coverage=1 00:07:38.743 --rc genhtml_legend=1 00:07:38.743 --rc geninfo_all_blocks=1 00:07:38.743 --rc geninfo_unexecuted_blocks=1 00:07:38.743 00:07:38.743 ' 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.743 --rc genhtml_branch_coverage=1 00:07:38.743 --rc genhtml_function_coverage=1 00:07:38.743 --rc genhtml_legend=1 00:07:38.743 --rc geninfo_all_blocks=1 00:07:38.743 --rc geninfo_unexecuted_blocks=1 00:07:38.743 00:07:38.743 ' 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.743 --rc genhtml_branch_coverage=1 00:07:38.743 --rc genhtml_function_coverage=1 00:07:38.743 --rc genhtml_legend=1 00:07:38.743 --rc geninfo_all_blocks=1 00:07:38.743 --rc geninfo_unexecuted_blocks=1 00:07:38.743 
00:07:38.743 ' 00:07:38.743 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.744 12:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:45.313 12:51:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:45.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:45.313 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:45.313 12:51:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:45.313 Found net devices under 0000:86:00.0: cvl_0_0 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.313 12:51:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:45.313 Found net devices under 0000:86:00.1: cvl_0_1 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.313 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.314 12:51:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:45.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:07:45.314 00:07:45.314 --- 10.0.0.2 ping statistics --- 00:07:45.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.314 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:45.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:07:45.314 00:07:45.314 --- 10.0.0.1 ping statistics --- 00:07:45.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.314 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1823715 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1823715 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1823715 ']' 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.314 [2024-11-29 12:51:44.418177] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:07:45.314 [2024-11-29 12:51:44.418220] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.314 [2024-11-29 12:51:44.484909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.314 [2024-11-29 12:51:44.524419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.314 [2024-11-29 12:51:44.524455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.314 [2024-11-29 12:51:44.524462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.314 [2024-11-29 12:51:44.524468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.314 [2024-11-29 12:51:44.524474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:45.314 [2024-11-29 12:51:44.525921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.314 [2024-11-29 12:51:44.525941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.314 [2024-11-29 12:51:44.525943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:45.314 [2024-11-29 12:51:44.840391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.314 12:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:45.314 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.572 [2024-11-29 12:51:45.233818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.572 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.830 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:46.089 Malloc0 00:07:46.089 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:46.089 Delay0 00:07:46.089 12:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.347 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:46.605 NULL1 00:07:46.605 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:46.863 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:46.863 12:51:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1823986 00:07:46.863 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:46.863 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.122 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.122 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:47.122 12:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:47.381 true 00:07:47.381 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:47.381 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.641 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.899 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:47.899 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:48.158 true 00:07:48.158 12:51:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:48.158 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.158 12:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.416 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:48.416 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:48.675 true 00:07:48.675 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:48.675 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.934 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.193 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:49.193 12:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:49.193 true 00:07:49.451 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:49.451 12:51:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.451 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.709 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:49.709 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:49.968 true 00:07:49.968 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:49.968 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.227 12:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.485 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:50.485 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:50.485 true 00:07:50.744 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:50.744 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.744 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.002 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:51.003 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:51.261 true 00:07:51.261 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:51.261 12:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.520 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.779 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:51.779 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:51.779 true 00:07:52.037 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:52.037 12:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.037 
12:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.296 12:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:52.296 12:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:52.554 true 00:07:52.554 12:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:52.554 12:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.813 12:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.073 12:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:53.073 12:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:53.073 true 00:07:53.331 12:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:53.332 12:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.332 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.590 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:53.590 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:53.848 true 00:07:53.848 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:53.848 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.105 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.363 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:54.363 12:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:54.363 true 00:07:54.621 12:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:54.621 12:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.621 12:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.878 
12:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:54.878 12:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:55.136 true 00:07:55.136 12:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:55.136 12:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.393 12:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.650 12:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:55.650 12:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:55.650 true 00:07:55.650 12:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:55.650 12:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.907 12:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.165 12:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:56.165 12:51:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:56.422 true 00:07:56.422 12:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:56.422 12:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.680 12:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.680 12:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:56.680 12:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:56.937 true 00:07:56.937 12:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:56.937 12:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.195 12:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.453 12:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:57.453 12:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:57.712 true 00:07:57.712 12:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:57.712 12:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.971 12:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.971 12:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:57.971 12:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:58.229 true 00:07:58.229 12:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:58.229 12:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.489 12:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.751 12:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:58.751 12:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:59.009 true 00:07:59.009 12:51:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:59.009 12:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.009 12:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.268 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:59.268 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:59.528 true 00:07:59.528 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:07:59.528 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.787 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.046 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:00.046 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:00.046 true 00:08:00.046 12:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:00.046 12:51:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.305 12:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.565 12:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:00.565 12:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:00.825 true 00:08:00.825 12:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:00.825 12:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.083 12:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.341 12:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:01.341 12:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:01.341 true 00:08:01.341 12:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:01.341 12:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.599 12:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.859 12:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:01.859 12:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:02.118 true 00:08:02.118 12:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:02.118 12:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.377 12:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.377 12:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:02.377 12:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:02.636 true 00:08:02.636 12:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:02.636 12:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.894 
12:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.152 12:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:03.152 12:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:03.411 true 00:08:03.411 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:03.411 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.671 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.671 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:03.671 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:03.931 true 00:08:03.931 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:03.931 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.189 12:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.448 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:04.448 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:04.707 true 00:08:04.707 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:04.707 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.707 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.965 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:04.965 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:05.224 true 00:08:05.224 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:05.224 12:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.484 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.744 
12:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:05.744 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:06.002 true 00:08:06.002 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:06.002 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.002 12:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.260 12:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:06.260 12:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:06.518 true 00:08:06.518 12:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:06.518 12:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.776 12:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.034 12:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:07.034 12:52:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:07.034 true 00:08:07.293 12:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:07.293 12:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.293 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.566 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:07.566 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:07.849 true 00:08:07.849 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:07.849 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.144 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.144 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:08.144 12:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:08.422 true 00:08:08.422 12:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:08.422 12:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.708 12:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.967 12:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:08.968 12:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:08.968 true 00:08:08.968 12:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:08.968 12:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.226 12:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.485 12:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:09.485 12:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:09.742 true 00:08:09.742 12:52:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:09.742 12:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.001 12:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.260 12:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:10.260 12:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:10.260 true 00:08:10.260 12:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:10.260 12:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.517 12:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.776 12:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:10.776 12:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:11.033 true 00:08:11.033 12:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:11.033 12:52:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.291 12:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.549 12:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:11.549 12:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:11.549 true 00:08:11.809 12:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:11.809 12:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.809 12:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.068 12:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:12.068 12:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:12.327 true 00:08:12.327 12:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:12.327 12:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.586 12:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.844 12:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:12.844 12:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:12.844 true 00:08:12.844 12:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:12.844 12:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.104 12:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.362 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:13.362 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:13.621 true 00:08:13.621 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:13.621 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.880 
12:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.880 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:14.139 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:14.139 true 00:08:14.139 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:14.139 12:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.397 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.655 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:14.655 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:14.915 true 00:08:14.915 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:14.915 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.174 12:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.432 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:15.432 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:15.432 true 00:08:15.432 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:15.432 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.690 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.949 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:08:15.949 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:08:16.208 true 00:08:16.208 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:16.208 12:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.466 12:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.724 
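The iterations above all repeat one cycle from ns_hotplug_stress.sh: remove namespace 1, re-add the Delay0 bdev as a namespace, then grow NULL1 by one block (1022, 1023, ... in this log). As a reading aid only, here is a hedged sketch of that per-iteration RPC sequence; the function name and list form are illustrative, not part of the actual test script.

```python
# Sketch (assumption): the remove-ns / add-ns / resize cycle this log
# repeats, expressed as the ordered RPC calls issued per iteration.
# Script line tags (sh@45 etc.) match the markers printed in the log.
def hotplug_iteration(null_size: int) -> list[str]:
    nqn = "nqn.2016-06.io.spdk:cnode1"
    return [
        f"nvmf_subsystem_remove_ns {nqn} 1",    # ns_hotplug_stress.sh@45
        f"nvmf_subsystem_add_ns {nqn} Delay0",  # ns_hotplug_stress.sh@46
        f"bdev_null_resize NULL1 {null_size}",  # ns_hotplug_stress.sh@50
    ]

print(hotplug_iteration(1022)[2])  # bdev_null_resize NULL1 1022
```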
12:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:08:16.724 12:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:08:16.724 true 00:08:16.724 12:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:16.724 12:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.983 12:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.983 Initializing NVMe Controllers 00:08:16.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:16.983 Controller IO queue size 128, less than required. 00:08:16.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:16.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:16.983 Initialization complete. Launching workers. 
00:08:16.983 ======================================================== 00:08:16.983 Latency(us) 00:08:16.983 Device Information : IOPS MiB/s Average min max 00:08:16.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26496.32 12.94 4830.81 2712.57 8200.66 00:08:16.983 ======================================================== 00:08:16.983 Total : 26496.32 12.94 4830.81 2712.57 8200.66 00:08:16.983 00:08:17.243 12:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:08:17.243 12:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:08:17.502 true 00:08:17.502 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1823986 00:08:17.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1823986) - No such process 00:08:17.502 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1823986 00:08:17.502 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.761 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.761 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:17.761 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:17.761 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:17.761 
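The latency summary printed above lists, per device row, the columns IOPS, MiB/s, Average, min, and max (all latencies in microseconds). A minimal sketch of pulling the IOPS figure out of such a row, assuming the five numeric columns are always the last five whitespace-separated fields as in this log's header:

```python
# Hypothetical helper (not part of the test suite): extract IOPS from an
# SPDK perf summary row such as the "TCP (...) NSID 2 from core 0:" line.
# Column order (IOPS, MiB/s, Average, min, max) follows the table header.
def parse_iops(line: str) -> float:
    fields = line.split()
    return float(fields[-5])  # IOPS is the first of the five numeric columns

row = ("TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 "
       "from core 0: 26496.32 12.94 4830.81 2712.57 8200.66")
print(parse_iops(row))  # 26496.32
```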
12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:17.761 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:18.020 null0 00:08:18.020 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.020 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.020 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:18.279 null1 00:08:18.279 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.279 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.279 12:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:18.538 null2 00:08:18.538 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.538 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.538 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:18.538 null3 00:08:18.538 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.538 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
i < nthreads )) 00:08:18.538 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:18.797 null4 00:08:18.797 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:18.797 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:18.797 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:19.056 null5 00:08:19.056 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:19.056 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.056 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:19.315 null6 00:08:19.315 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:19.315 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.315 12:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:19.315 null7 00:08:19.574 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
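At this point the log has recorded eight bdev_null_create calls, null0 through null7, each sized 100 blocks of 4096 bytes, driven by the loop at ns_hotplug_stress.sh@59-60. A hedged sketch of reconstructing those command lines (the helper name is illustrative; the rpc.py path is as printed in the log):

```python
# Sketch (assumption): the eight bdev_null_create invocations the log
# records above. Each null bdev is 100 blocks of 4096 bytes.
RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def null_create_cmds(nthreads: int = 8, size: int = 100, bs: int = 4096):
    # One command per worker thread, mirroring the (( i < nthreads )) loop.
    return [f"{RPC} bdev_null_create null{i} {size} {bs}"
            for i in range(nthreads)]

cmds = null_create_cmds()
print(len(cmds))  # 8
print(cmds[0])    # .../rpc.py bdev_null_create null0 100 4096
```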
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1829660 1829661 1829663 1829665 1829667 1829669 1829671 1829672 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.575 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.834 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.835 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.835 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.835 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.835 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.835 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.835 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.835 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.093 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.093 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.093 12:52:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.093 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.093 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.093 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.093 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.093 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.352 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.352 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.352 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.352 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.352 12:52:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.352 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.352 12:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.352 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.611 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.870 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.870 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.870 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.870 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:20.870 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.870 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.870 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.871 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.130 12:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.389 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.389 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.389 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.389 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.389 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.389 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.389 12:52:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.389 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.648 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.648 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.648 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.648 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.648 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.648 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.648 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.648 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.649 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.908 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.167 12:52:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.167 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.167 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.167 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.167 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.167 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.167 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.167 12:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.426 12:52:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.426 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.684 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.944 12:52:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.944 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.203 
12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.203 12:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:23.462 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:23.462 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:23.462 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.462 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.462 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.462 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:23.462 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:23.462 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.720 12:52:23 
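The cycle traced above comes from target/ns_hotplug_stress.sh: line @16 drives the iteration counter, @17 attaches eight null bdevs as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1, and @18 detaches them again. The interleaved ordering in the log suggests the real script issues the RPCs concurrently. Below is a minimal sequential sketch of that loop; `rpc` is a hypothetical stub standing in for scripts/rpc.py (which in the real run talks to the SPDK target over its RPC socket), while the nsid-to-bdev mapping (`-n N` paired with `null(N-1)`) matches the log.

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress add/remove loop traced above.
rpc() { echo "rpc.py $*"; }   # hypothetical stub for scripts/rpc.py

NQN=nqn.2016-06.io.spdk:cnode1
i=0
calls=0
while (( i < 10 )); do
    # Attach eight null bdevs as namespaces 1-8 (concurrent in the real
    # script, which is why the log ordering varies per iteration).
    for n in {1..8}; do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" >/dev/null
        (( ++calls ))
    done
    # Then detach all eight namespaces again.
    for n in {1..8}; do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n" >/dev/null
        (( ++calls ))
    done
    (( ++i ))
done
echo "$calls"
```

Ten iterations of eight adds plus eight removes yields 160 RPC calls, which is why the trace above repeats the same @16/@17/@18 pattern so many times.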
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.720 rmmod nvme_tcp 00:08:23.720 rmmod nvme_fabrics 00:08:23.720 rmmod nvme_keyring 00:08:23.720 12:52:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1823715 ']' 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1823715 00:08:23.720 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1823715 ']' 00:08:23.721 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1823715 00:08:23.721 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:23.721 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.721 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1823715 00:08:23.721 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:23.721 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:23.721 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1823715' 00:08:23.721 killing process with pid 1823715 00:08:23.721 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1823715 00:08:23.721 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1823715 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.020 12:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.923 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.182 00:08:26.182 real 0m47.412s 00:08:26.182 user 3m22.924s 00:08:26.182 sys 0m17.217s 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.182 ************************************ 00:08:26.182 END TEST nvmf_ns_hotplug_stress 00:08:26.182 ************************************ 00:08:26.182 12:52:25 
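The `killprocess 1823715` teardown traced above (autotest_common.sh@954-978) follows a fixed pattern: check the pid argument is set, probe it with `kill -0`, inspect its command name with `ps` (refusing to kill `sudo` itself; here it is `reactor_1`), then kill and reap it. A runnable sketch under stated assumptions — the background `sleep` and the `killprocess_sketch` name are stand-ins for the real nvmf target process and helper:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess teardown traced above.
killprocess_sketch() {
    local pid=$1
    if [ -z "$pid" ]; then return 1; fi
    if ! kill -0 "$pid" 2>/dev/null; then return 1; fi   # already gone?
    local name
    name=$(ps --no-headers -o comm= "$pid")              # e.g. "reactor_1"
    if [ "$name" = sudo ]; then return 1; fi             # never kill sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                      # reap the child
}

sleep 30 &              # hypothetical stand-in for the nvmf target
bgpid=$!
killprocess_sketch "$bgpid"
```

The `wait` at the end mirrors the log's `kill` followed by `wait 1823715`: it guarantees the process is reaped before the script moves on to module unloading and namespace cleanup.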
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.182 ************************************ 00:08:26.182 START TEST nvmf_delete_subsystem 00:08:26.182 ************************************ 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:26.182 * Looking for test storage... 00:08:26.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.182 12:52:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.182 --rc genhtml_branch_coverage=1 00:08:26.182 --rc genhtml_function_coverage=1 00:08:26.182 --rc genhtml_legend=1 
00:08:26.182 --rc geninfo_all_blocks=1 00:08:26.182 --rc geninfo_unexecuted_blocks=1 00:08:26.182 00:08:26.182 ' 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.182 --rc genhtml_branch_coverage=1 00:08:26.182 --rc genhtml_function_coverage=1 00:08:26.182 --rc genhtml_legend=1 00:08:26.182 --rc geninfo_all_blocks=1 00:08:26.182 --rc geninfo_unexecuted_blocks=1 00:08:26.182 00:08:26.182 ' 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.182 --rc genhtml_branch_coverage=1 00:08:26.182 --rc genhtml_function_coverage=1 00:08:26.182 --rc genhtml_legend=1 00:08:26.182 --rc geninfo_all_blocks=1 00:08:26.182 --rc geninfo_unexecuted_blocks=1 00:08:26.182 00:08:26.182 ' 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.182 --rc genhtml_branch_coverage=1 00:08:26.182 --rc genhtml_function_coverage=1 00:08:26.182 --rc genhtml_legend=1 00:08:26.182 --rc geninfo_all_blocks=1 00:08:26.182 --rc geninfo_unexecuted_blocks=1 00:08:26.182 00:08:26.182 ' 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.182 12:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.182 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:26.182 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:26.442 12:52:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.442 12:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.712 12:52:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.712 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:31.713 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:31.713 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:31.713 Found net devices under 0000:86:00.0: cvl_0_0 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.713 12:52:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:31.713 Found net devices under 0000:86:00.1: cvl_0_1 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:08:31.713 00:08:31.713 --- 10.0.0.2 ping statistics --- 00:08:31.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.713 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:31.713 00:08:31.713 --- 10.0.0.1 ping statistics --- 00:08:31.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.713 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1834051 00:08:31.713 12:52:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1834051 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1834051 ']' 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.713 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.713 [2024-11-29 12:52:31.512500] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:08:31.713 [2024-11-29 12:52:31.512543] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.972 [2024-11-29 12:52:31.578994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:31.972 [2024-11-29 12:52:31.620491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:31.972 [2024-11-29 12:52:31.620530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.972 [2024-11-29 12:52:31.620538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.972 [2024-11-29 12:52:31.620544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.972 [2024-11-29 12:52:31.620549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.972 [2024-11-29 12:52:31.621778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.972 [2024-11-29 12:52:31.621781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.972 [2024-11-29 12:52:31.759938] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.972 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.973 [2024-11-29 12:52:31.776116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.973 NULL1 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.973 12:52:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.973 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.231 Delay0 00:08:32.231 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.231 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.231 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.231 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:32.231 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.231 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1834072 00:08:32.231 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:32.231 12:52:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:32.231 [2024-11-29 12:52:31.860785] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
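[Editorial note] The setup traced above (delete_subsystem.sh lines 15-28) boils down to a short RPC sequence. A hand-replay sketch, assuming a running nvmf_tgt with SPDK's `scripts/rpc.py` on PATH — the NQN, address 10.0.0.2, and all argument values are taken verbatim from this run; this is not runnable without a live target:

```shell
# Sketch of the target setup driven by delete_subsystem.sh above.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512      # null bdev: size in MiB, 512 B blocks
# Delay bdev wrapping NULL1: 1 s (1000000 us) avg and p99 read/write latency,
# so I/O is still in flight when the subsystem is deleted mid-run.
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive I/O against it, as the test does:
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
```

The 1-second delay bdev is what guarantees queued I/O exists when `nvmf_delete_subsystem` fires, producing the failed completions that follow in the log.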
00:08:34.135 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.135 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.135 12:52:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[repeated log entries condensed: many identical "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines at 00:08:34.135-00:08:35.331, as in-flight perf I/O fails while the subsystem is deleted]
00:08:34.135 [2024-11-29 12:52:33.902441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bde680 is same with the state(6) to be set
00:08:34.135 [2024-11-29 12:52:33.902796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa16800d020 is same with the state(6) to be set
00:08:35.071 [2024-11-29 12:52:34.873656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdf9b0 is same with the state(6) to be set
00:08:35.330 [2024-11-29 12:52:34.904530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bde2c0 is same with the state(6) to be set
00:08:35.331 [2024-11-29 12:52:34.904704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bde4a0 is same with the state(6) to be set
00:08:35.331 [2024-11-29 12:52:34.904873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bde860 is same with the state(6) to be set
00:08:35.331 [2024-11-29 12:52:34.905536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa16800d350 is same with the state(6) to be set
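[Editorial note] The failed completions and tqpair errors above are the expected effect of `nvmf_delete_subsystem` racing with in-flight perf I/O; the test then polls the perf PID until it exits. That polling pattern can be sketched as follows, with a plain `sleep` as a hypothetical stand-in for `spdk_nvme_perf` (purely to show the loop; the delay bound of 30 matches delete_subsystem.sh line 38):

```shell
#!/usr/bin/env bash
# Teardown pattern from delete_subsystem.sh: delete the subsystem while
# I/O is in flight, then poll the perf process with `kill -0` until it
# exits. `sleep 1` stands in for spdk_nvme_perf here (assumption).
set -eu

sleep 1 &              # stand-in for the backgrounded spdk_nvme_perf run
perf_pid=$!

# (real test issues `rpc_cmd nvmf_delete_subsystem nqn...cnode1` here)

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0: existence check only
    if (( delay++ > 30 )); then
        echo "perf did not exit" >&2
        exit 1
    fi
    sleep 0.5
done
echo "perf exited"     # prints "perf exited"
```

`kill -0` sends no signal; it only reports whether the PID still exists, which is why the script tolerates the later "No such process" message once perf is gone.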
00:08:35.331 Initializing NVMe Controllers 00:08:35.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:35.331 Controller IO queue size 128, less than required. 00:08:35.331 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:35.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:35.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:35.331 Initialization complete. Launching workers. 00:08:35.331 ======================================================== 00:08:35.331 Latency(us) 00:08:35.331 Device Information : IOPS MiB/s Average min max 00:08:35.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.51 0.10 943644.57 489.56 1012179.61 00:08:35.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.80 0.08 867496.64 247.60 1012374.60 00:08:35.331 ======================================================== 00:08:35.331 Total : 353.31 0.17 909634.68 247.60 1012374.60 00:08:35.331 00:08:35.331 [2024-11-29 12:52:34.906287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdf9b0 (9): Bad file descriptor 00:08:35.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:35.331 12:52:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.331 12:52:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:35.331 12:52:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1834072 00:08:35.331 12:52:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:35.897 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 
30 )) 00:08:35.897 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1834072 00:08:35.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1834072) - No such process 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1834072 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1834072 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1834072 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:35.898 12:52:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.898 [2024-11-29 12:52:35.434324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1834765 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1834765 00:08:35.898 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:35.898 [2024-11-29 12:52:35.503999] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:36.157 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:36.157 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1834765 00:08:36.157 12:52:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:36.725 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:36.725 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1834765 00:08:36.725 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:37.293 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:37.293 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1834765 00:08:37.293 12:52:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:37.861 12:52:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:37.861 12:52:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 
1834765 00:08:37.861 12:52:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:38.428 12:52:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:38.428 12:52:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1834765 00:08:38.428 12:52:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:38.686 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:38.686 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1834765 00:08:38.686 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:38.945 Initializing NVMe Controllers 00:08:38.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:38.945 Controller IO queue size 128, less than required. 00:08:38.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:38.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:38.945 Initialization complete. Launching workers. 
00:08:38.945 ======================================================== 00:08:38.945 Latency(us) 00:08:38.945 Device Information : IOPS MiB/s Average min max 00:08:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003188.16 1000151.60 1041199.50 00:08:38.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005365.18 1000242.51 1042018.60 00:08:38.945 ======================================================== 00:08:38.945 Total : 256.00 0.12 1004276.67 1000151.60 1042018.60 00:08:38.945 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1834765 00:08:39.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1834765) - No such process 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1834765 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.204 12:52:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:08:39.204 rmmod nvme_tcp 00:08:39.204 rmmod nvme_fabrics 00:08:39.204 rmmod nvme_keyring 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1834051 ']' 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1834051 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1834051 ']' 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1834051 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1834051 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1834051' 00:08:39.463 killing process with pid 1834051 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1834051 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1834051 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.463 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.464 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.464 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.464 12:52:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.001 00:08:42.001 real 0m15.532s 00:08:42.001 user 0m28.749s 00:08:42.001 sys 0m5.058s 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.001 ************************************ 00:08:42.001 END TEST 
nvmf_delete_subsystem 00:08:42.001 ************************************ 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.001 ************************************ 00:08:42.001 START TEST nvmf_host_management 00:08:42.001 ************************************ 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:42.001 * Looking for test storage... 00:08:42.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.001 12:52:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.001 --rc genhtml_branch_coverage=1 00:08:42.001 --rc genhtml_function_coverage=1 00:08:42.001 --rc genhtml_legend=1 00:08:42.001 --rc 
geninfo_all_blocks=1 00:08:42.001 --rc geninfo_unexecuted_blocks=1 00:08:42.001 00:08:42.001 ' 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.001 --rc genhtml_branch_coverage=1 00:08:42.001 --rc genhtml_function_coverage=1 00:08:42.001 --rc genhtml_legend=1 00:08:42.001 --rc geninfo_all_blocks=1 00:08:42.001 --rc geninfo_unexecuted_blocks=1 00:08:42.001 00:08:42.001 ' 00:08:42.001 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.001 --rc genhtml_branch_coverage=1 00:08:42.001 --rc genhtml_function_coverage=1 00:08:42.002 --rc genhtml_legend=1 00:08:42.002 --rc geninfo_all_blocks=1 00:08:42.002 --rc geninfo_unexecuted_blocks=1 00:08:42.002 00:08:42.002 ' 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.002 --rc genhtml_branch_coverage=1 00:08:42.002 --rc genhtml_function_coverage=1 00:08:42.002 --rc genhtml_legend=1 00:08:42.002 --rc geninfo_all_blocks=1 00:08:42.002 --rc geninfo_unexecuted_blocks=1 00:08:42.002 00:08:42.002 ' 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.002 
12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.002 12:52:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:47.285 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:47.285 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.285 12:52:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:47.285 Found net devices under 0000:86:00.0: cvl_0_0 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:47.285 Found net devices under 0000:86:00.1: cvl_0_1 00:08:47.285 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.286 12:52:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:08:47.286 00:08:47.286 --- 10.0.0.2 ping statistics --- 00:08:47.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.286 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:08:47.286 00:08:47.286 --- 10.0.0.1 ping statistics --- 00:08:47.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.286 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:47.286 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.545 12:52:47 
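The setup traced above (nvmf/common.sh@265-291) follows a fixed pattern: create a fresh network namespace, move the target-side interface into it, assign the two /24 addresses, bring the links up, then verify connectivity with cross-namespace pings. A minimal sketch of that sequence, with the interface names and addresses taken from this log (the commands are collected and printed rather than executed, since running them requires root and the `cvl_0_*` devices):

```shell
#!/usr/bin/env bash
# Sketch of the namespace plumbing performed by nvmf/common.sh in this run.
# Interface names and IPs match the log; adjust for other NICs.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, moved into the namespace (10.0.0.2)
INI_IF=cvl_0_1   # initiator side, stays in the default namespace (10.0.0.1)

cmds=(
  "ip netns add $NS"
  "ip link set $TGT_IF netns $NS"
  "ip addr add 10.0.0.1/24 dev $INI_IF"
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
  "ip link set $INI_IF up"
  "ip netns exec $NS ip link set $TGT_IF up"
  "ip netns exec $NS ip link set lo up"
)
printf '%s\n' "${cmds[@]}"
```

After this, every target-side process is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array), which is why the pings in the log run in opposite directions across the namespace boundary.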
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1838776 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1838776 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1838776 ']' 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.545 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.545 [2024-11-29 12:52:47.192305] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:08:47.545 [2024-11-29 12:52:47.192348] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.545 [2024-11-29 12:52:47.258478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.545 [2024-11-29 12:52:47.305098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.545 [2024-11-29 12:52:47.305133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.545 [2024-11-29 12:52:47.305141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.545 [2024-11-29 12:52:47.305148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.545 [2024-11-29 12:52:47.305153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:47.545 [2024-11-29 12:52:47.306787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.545 [2024-11-29 12:52:47.306871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.545 [2024-11-29 12:52:47.306993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.545 [2024-11-29 12:52:47.306993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.804 [2024-11-29 12:52:47.457615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:47.804 12:52:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.804 Malloc0 00:08:47.804 [2024-11-29 12:52:47.532659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1838979 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1838979 /var/tmp/bdevperf.sock 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1838979 ']' 00:08:47.804 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:47.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:47.805 { 00:08:47.805 "params": { 00:08:47.805 "name": "Nvme$subsystem", 00:08:47.805 "trtype": "$TEST_TRANSPORT", 00:08:47.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.805 "adrfam": "ipv4", 00:08:47.805 "trsvcid": "$NVMF_PORT", 00:08:47.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.805 "hdgst": ${hdgst:-false}, 
00:08:47.805 "ddgst": ${ddgst:-false} 00:08:47.805 }, 00:08:47.805 "method": "bdev_nvme_attach_controller" 00:08:47.805 } 00:08:47.805 EOF 00:08:47.805 )") 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:47.805 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:47.805 "params": { 00:08:47.805 "name": "Nvme0", 00:08:47.805 "trtype": "tcp", 00:08:47.805 "traddr": "10.0.0.2", 00:08:47.805 "adrfam": "ipv4", 00:08:47.805 "trsvcid": "4420", 00:08:47.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:47.805 "hdgst": false, 00:08:47.805 "ddgst": false 00:08:47.805 }, 00:08:47.805 "method": "bdev_nvme_attach_controller" 00:08:47.805 }' 00:08:48.064 [2024-11-29 12:52:47.629838] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:08:48.064 [2024-11-29 12:52:47.629883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838979 ] 00:08:48.064 [2024-11-29 12:52:47.693431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.064 [2024-11-29 12:52:47.735002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.323 Running I/O for 10 seconds... 
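The `--json /dev/fd/63` argument to bdevperf above is a process-substitution trick: `gen_nvmf_target_json` expands a per-subsystem heredoc template (visible in the xtrace) and bdevperf reads the rendered result as its bdev config. A sketch of that template expansion for subsystem 0, with the values the log's `printf` shows filled in:

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json's heredoc expansion (nvmf/common.sh@582)
# for subsystem 0; the variable values match this log's printf output.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The real helper joins one such fragment per subsystem with `IFS=,` and pipes the result through `jq .` for validation before handing it to bdevperf, which is the pretty-printed JSON visible in the log.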
00:08:48.323 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.323 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:48.323 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:08:48.324 12:52:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=654 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 654 -ge 100 ']' 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.584 [2024-11-29 12:52:48.299076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a40b0 is same with the state(6) to be set 00:08:48.584 [2024-11-29 12:52:48.299119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a40b0 is same with the state(6) to be set 00:08:48.584 [2024-11-29 12:52:48.299127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a40b0 is same with the state(6) to be set 00:08:48.584 [2024-11-29 12:52:48.299133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a40b0 is same with the state(6) to be set 00:08:48.584 [2024-11-29 12:52:48.299140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a40b0 is same with the state(6) to be set 00:08:48.584 [2024-11-29 12:52:48.299146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a40b0 is same with the state(6) to be set 00:08:48.584 [2024-11-29 12:52:48.299153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a40b0 is same with the state(6) to be set 00:08:48.584 12:52:48 
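The `waitforio` exchange traced above (host_management.sh@52-64) is a bounded poll: up to 10 iterations of `bdev_get_iostat`, succeeding as soon as `num_read_ops` reaches 100 — here the first reading is 78, the second 654, so the loop breaks on iteration two. Its control flow can be sketched without the RPC; `fake_iostat` below is a hypothetical stand-in that replays the two readings from this log:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio loop in target/host_management.sh: poll iostat up
# to 10 times, succeed once num_read_ops crosses 100.  fake_iostat stands in
# for "rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
# jq -r '.bdevs[0].num_read_ops'" and replays the readings seen in this log.
readings=(78 654)
n=0
fake_iostat() { read_io_count=${readings[$n]}; n=$((n + 1)); }

ret=1
i=10
while (( i != 0 )); do
  fake_iostat
  if [ "$read_io_count" -ge 100 ]; then
    ret=0
    break
  fi
  # the real script sleeps 0.25 s between polls (host_management.sh@62)
  i=$((i - 1))
done
echo "ret=$ret read_io_count=$read_io_count"   # prints: ret=0 read_io_count=654
```

The `ret=0` / `break` / `return 0` sequence in the log is exactly this path: the 654 reading satisfies the `-ge 100` check on the second poll.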
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.584 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.584 [2024-11-29 12:52:48.306491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:48.584 [2024-11-29 12:52:48.306524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:48.584 [2024-11-29 12:52:48.306542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:48.584 [2024-11-29 12:52:48.306557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:48.584 [2024-11-29 12:52:48.306572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306579] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x554510 is same with the state(6) to be set 00:08:48.584 [2024-11-29 12:52:48.306662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:48.584 [2024-11-29 12:52:48.306849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.584 [2024-11-29 12:52:48.306870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.584 [2024-11-29 12:52:48.306879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.585 [2024-11-29 12:52:48.306887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.585 [2024-11-29 12:52:48.306896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.585 [2024-11-29 12:52:48.306903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.585 [2024-11-29 12:52:48.306912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.585 [2024-11-29 12:52:48.306920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.585 [2024-11-29 12:52:48.306928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.585 [2024-11-29 
12:52:48.306936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.585 [2024-11-29 12:52:48.306944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.585 [2024-11-29 12:52:48.306958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.585 [2024-11-29 12:52:48.306966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.585 [2024-11-29 12:52:48.306973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.585 [2024-11-29 12:52:48.306981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.585 [2024-11-29 12:52:48.306988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.585 [2024-11-29 12:52:48.306996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.585 [2024-11-29 12:52:48.307003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.585 [2024-11-29 12:52:48.307012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:48.585 [2024-11-29 12:52:48.307019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.585 [2024-11-29 12:52:48.307027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:48.585 [2024-11-29 12:52:48.307034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical WRITE command / ABORTED - SQ DELETION (00/08) completion pairs for cid:23-63, lba:101248-106368 step 128, len:128, timestamps 12:52:48.307042-12:52:48.307660 elided]
00:08:48.586 [2024-11-29 12:52:48.308621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:08:48.586 task offset: 98304 on job bdev=Nvme0n1 fails
00:08:48.586
00:08:48.586 Latency(us)
00:08:48.586 [2024-11-29T11:52:48.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:48.586 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:48.586 Job: Nvme0n1 ended in about 0.41 seconds with error
00:08:48.586 Verification LBA range: start 0x0 length 0x400
00:08:48.586 Nvme0n1 : 0.41 1885.46 117.84 157.12 0.00 30488.90 1695.39 27810.06
00:08:48.586 [2024-11-29T11:52:48.406Z] ===================================================================================================================
00:08:48.586 [2024-11-29T11:52:48.406Z] Total : 1885.46 117.84 157.12 0.00 30488.90 1695.39 27810.06
00:08:48.586 [2024-11-29 12:52:48.311031] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:48.586 [2024-11-29 12:52:48.311053]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x554510 (9): Bad file descriptor 00:08:48.586 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.586 12:52:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:48.586 [2024-11-29 12:52:48.317189] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:49.523 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1838979 00:08:49.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1838979) - No such process 00:08:49.523 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:49.523 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:49.523 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:49.523 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:49.523 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:49.523 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:49.524 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:49.524 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:49.524 { 00:08:49.524 "params": { 00:08:49.524 
"name": "Nvme$subsystem", 00:08:49.524 "trtype": "$TEST_TRANSPORT", 00:08:49.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.524 "adrfam": "ipv4", 00:08:49.524 "trsvcid": "$NVMF_PORT", 00:08:49.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.524 "hdgst": ${hdgst:-false}, 00:08:49.524 "ddgst": ${ddgst:-false} 00:08:49.524 }, 00:08:49.524 "method": "bdev_nvme_attach_controller" 00:08:49.524 } 00:08:49.524 EOF 00:08:49.524 )") 00:08:49.524 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:49.524 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:49.524 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:49.524 12:52:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:49.524 "params": { 00:08:49.524 "name": "Nvme0", 00:08:49.524 "trtype": "tcp", 00:08:49.524 "traddr": "10.0.0.2", 00:08:49.524 "adrfam": "ipv4", 00:08:49.524 "trsvcid": "4420", 00:08:49.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:49.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:49.524 "hdgst": false, 00:08:49.524 "ddgst": false 00:08:49.524 }, 00:08:49.524 "method": "bdev_nvme_attach_controller" 00:08:49.524 }' 00:08:49.783 [2024-11-29 12:52:49.367239] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:08:49.783 [2024-11-29 12:52:49.367283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839284 ]
00:08:49.783 [2024-11-29 12:52:49.429943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:49.783 [2024-11-29 12:52:49.469461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:50.042 Running I/O for 1 seconds...
00:08:50.980 1920.00 IOPS, 120.00 MiB/s
00:08:50.980 Latency(us)
00:08:50.980 [2024-11-29T11:52:50.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:50.980 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:50.980 Verification LBA range: start 0x0 length 0x400
00:08:50.980 Nvme0n1 : 1.00 1976.35 123.52 0.00 0.00 31869.85 7265.95 27126.21
00:08:50.980 [2024-11-29T11:52:50.800Z] ===================================================================================================================
00:08:50.980 [2024-11-29T11:52:50.800Z] Total : 1976.35 123.52 0.00 0.00 31869.85 7265.95 27126.21
00:08:51.238 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:08:51.238 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:51.238 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:08:51.238 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:08:51.238 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:08:51.238 12:52:50
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.239 rmmod nvme_tcp 00:08:51.239 rmmod nvme_fabrics 00:08:51.239 rmmod nvme_keyring 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1838776 ']' 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1838776 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1838776 ']' 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1838776 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838776 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838776' 00:08:51.239 killing process with pid 1838776 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1838776 00:08:51.239 12:52:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1838776 00:08:51.497 [2024-11-29 12:52:51.086184] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:51.497 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.497 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.498 12:52:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.403 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.403 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:53.403 00:08:53.403 real 0m11.771s 00:08:53.403 user 0m18.719s 00:08:53.403 sys 0m5.259s 00:08:53.403 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.403 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.403 ************************************ 00:08:53.403 END TEST nvmf_host_management 00:08:53.403 ************************************ 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.673 ************************************ 00:08:53.673 START TEST nvmf_lvol 00:08:53.673 ************************************ 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:53.673 * Looking for test storage... 
00:08:53.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.673 12:52:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:53.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.673 --rc genhtml_branch_coverage=1 00:08:53.673 --rc genhtml_function_coverage=1 00:08:53.673 --rc genhtml_legend=1 00:08:53.673 --rc geninfo_all_blocks=1 00:08:53.673 --rc geninfo_unexecuted_blocks=1 
00:08:53.673 00:08:53.673 ' 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:53.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.673 --rc genhtml_branch_coverage=1 00:08:53.673 --rc genhtml_function_coverage=1 00:08:53.673 --rc genhtml_legend=1 00:08:53.673 --rc geninfo_all_blocks=1 00:08:53.673 --rc geninfo_unexecuted_blocks=1 00:08:53.673 00:08:53.673 ' 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:53.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.673 --rc genhtml_branch_coverage=1 00:08:53.673 --rc genhtml_function_coverage=1 00:08:53.673 --rc genhtml_legend=1 00:08:53.673 --rc geninfo_all_blocks=1 00:08:53.673 --rc geninfo_unexecuted_blocks=1 00:08:53.673 00:08:53.673 ' 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:53.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.673 --rc genhtml_branch_coverage=1 00:08:53.673 --rc genhtml_function_coverage=1 00:08:53.673 --rc genhtml_legend=1 00:08:53.673 --rc geninfo_all_blocks=1 00:08:53.673 --rc geninfo_unexecuted_blocks=1 00:08:53.673 00:08:53.673 ' 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.673 12:52:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.673 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.674 12:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:58.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:58.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.940 
12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:58.940 Found net devices under 0000:86:00.0: cvl_0_0 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:58.940 12:52:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:58.940 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:58.941 Found net devices under 0000:86:00.1: cvl_0_1 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:58.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:58.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:08:58.941 00:08:58.941 --- 10.0.0.2 ping statistics --- 00:08:58.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.941 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:08:58.941 00:08:58.941 --- 10.0.0.1 ping statistics --- 00:08:58.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.941 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1842998 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1842998 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1842998 ']' 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.941 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 [2024-11-29 12:52:58.727501] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:08:58.941 [2024-11-29 12:52:58.727548] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.199 [2024-11-29 12:52:58.793530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:59.199 [2024-11-29 12:52:58.836504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.199 [2024-11-29 12:52:58.836540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.199 [2024-11-29 12:52:58.836547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.199 [2024-11-29 12:52:58.836553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.199 [2024-11-29 12:52:58.836558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:59.199 [2024-11-29 12:52:58.837913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.199 [2024-11-29 12:52:58.838019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.199 [2024-11-29 12:52:58.838022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.199 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.199 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:59.199 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.199 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.199 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:59.199 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.200 12:52:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.458 [2024-11-29 12:52:59.141047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.458 12:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.717 12:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:59.717 12:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.977 12:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:59.977 12:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:59.977 12:52:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:00.236 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2f9f823d-68db-4185-93c9-b7399a6b1573 00:09:00.236 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2f9f823d-68db-4185-93c9-b7399a6b1573 lvol 20 00:09:00.495 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=28f862fe-bf82-4b96-ba30-fd9ea61eda61 00:09:00.495 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.753 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 28f862fe-bf82-4b96-ba30-fd9ea61eda61 00:09:01.011 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:01.011 [2024-11-29 12:53:00.790039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.011 12:53:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.270 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1843335 00:09:01.270 12:53:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:01.270 12:53:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:02.648 12:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 28f862fe-bf82-4b96-ba30-fd9ea61eda61 MY_SNAPSHOT 00:09:02.648 12:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c32a9aea-3f7a-4ec1-80a0-3addb4543464 00:09:02.648 12:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 28f862fe-bf82-4b96-ba30-fd9ea61eda61 30 00:09:02.906 12:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c32a9aea-3f7a-4ec1-80a0-3addb4543464 MY_CLONE 00:09:03.165 12:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0a9f8f08-acf4-4b2d-8b4f-5a83d17e2c41 00:09:03.165 12:53:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0a9f8f08-acf4-4b2d-8b4f-5a83d17e2c41 00:09:03.732 12:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1843335 00:09:11.849 Initializing NVMe Controllers 00:09:11.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:11.849 Controller IO queue size 128, less than required. 00:09:11.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:11.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:11.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:11.849 Initialization complete. Launching workers. 00:09:11.849 ======================================================== 00:09:11.849 Latency(us) 00:09:11.849 Device Information : IOPS MiB/s Average min max 00:09:11.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12011.80 46.92 10660.91 1543.45 60842.37 00:09:11.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11928.00 46.59 10730.43 3630.13 53723.02 00:09:11.849 ======================================================== 00:09:11.849 Total : 23939.80 93.51 10695.55 1543.45 60842.37 00:09:11.849 00:09:11.849 12:53:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:12.108 12:53:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 28f862fe-bf82-4b96-ba30-fd9ea61eda61 00:09:12.108 12:53:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2f9f823d-68db-4185-93c9-b7399a6b1573 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.367 rmmod nvme_tcp 00:09:12.367 rmmod nvme_fabrics 00:09:12.367 rmmod nvme_keyring 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1842998 ']' 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1842998 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1842998 ']' 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1842998 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.367 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1842998 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1842998' 00:09:12.626 killing process with pid 1842998 00:09:12.626 12:53:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1842998 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1842998 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.626 12:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.162 00:09:15.162 real 0m21.239s 00:09:15.162 user 1m2.959s 00:09:15.162 sys 0m7.245s 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:15.162 ************************************ 00:09:15.162 END TEST 
nvmf_lvol 00:09:15.162 ************************************ 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.162 ************************************ 00:09:15.162 START TEST nvmf_lvs_grow 00:09:15.162 ************************************ 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:15.162 * Looking for test storage... 00:09:15.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.162 12:53:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.162 --rc genhtml_branch_coverage=1 00:09:15.162 --rc genhtml_function_coverage=1 00:09:15.162 --rc genhtml_legend=1 00:09:15.162 --rc geninfo_all_blocks=1 00:09:15.162 --rc geninfo_unexecuted_blocks=1 00:09:15.162 00:09:15.162 ' 
00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.162 --rc genhtml_branch_coverage=1 00:09:15.162 --rc genhtml_function_coverage=1 00:09:15.162 --rc genhtml_legend=1 00:09:15.162 --rc geninfo_all_blocks=1 00:09:15.162 --rc geninfo_unexecuted_blocks=1 00:09:15.162 00:09:15.162 ' 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.162 --rc genhtml_branch_coverage=1 00:09:15.162 --rc genhtml_function_coverage=1 00:09:15.162 --rc genhtml_legend=1 00:09:15.162 --rc geninfo_all_blocks=1 00:09:15.162 --rc geninfo_unexecuted_blocks=1 00:09:15.162 00:09:15.162 ' 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.162 --rc genhtml_branch_coverage=1 00:09:15.162 --rc genhtml_function_coverage=1 00:09:15.162 --rc genhtml_legend=1 00:09:15.162 --rc geninfo_all_blocks=1 00:09:15.162 --rc geninfo_unexecuted_blocks=1 00:09:15.162 00:09:15.162 ' 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.162 12:53:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.162 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.163 
12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.163 12:53:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.163 
12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:15.163 12:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:20.432 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:20.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.432 
12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:20.432 Found net devices under 0000:86:00.0: cvl_0_0 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:20.432 Found net devices under 0000:86:00.1: cvl_0_1 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:20.432 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:20.433 12:53:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:20.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:09:20.433 00:09:20.433 --- 10.0.0.2 ping statistics --- 00:09:20.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.433 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:20.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:09:20.433 00:09:20.433 --- 10.0.0.1 ping statistics --- 00:09:20.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.433 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1848703 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1848703 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1848703 ']' 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:20.433 [2024-11-29 12:53:19.762489] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:09:20.433 [2024-11-29 12:53:19.762538] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.433 [2024-11-29 12:53:19.829734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.433 [2024-11-29 12:53:19.870771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.433 [2024-11-29 12:53:19.870808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.433 [2024-11-29 12:53:19.870815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.433 [2024-11-29 12:53:19.870821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.433 [2024-11-29 12:53:19.870826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:20.433 [2024-11-29 12:53:19.871369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.433 12:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:20.433 [2024-11-29 12:53:20.171581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.433 ************************************ 00:09:20.433 START TEST lvs_grow_clean 00:09:20.433 ************************************ 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.433 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.692 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:20.692 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:20.951 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:20.951 12:53:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:20.951 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:21.210 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:21.210 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:21.210 12:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 lvol 150 00:09:21.210 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cf176f12-7919-44fe-994e-8b8d0b9d4f52 00:09:21.211 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:21.211 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:21.470 [2024-11-29 12:53:21.204539] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:21.470 [2024-11-29 12:53:21.204591] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:21.470 true 00:09:21.470 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:21.470 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:21.729 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:21.729 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:21.988 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cf176f12-7919-44fe-994e-8b8d0b9d4f52 00:09:21.988 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:22.247 [2024-11-29 12:53:21.966848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.247 12:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.567 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1849163 00:09:22.567 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.567 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
1849163 /var/tmp/bdevperf.sock 00:09:22.567 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1849163 ']' 00:09:22.567 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:22.567 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.568 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:22.568 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:22.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:22.568 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.568 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:22.568 [2024-11-29 12:53:22.217652] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:09:22.568 [2024-11-29 12:53:22.217703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849163 ] 00:09:22.568 [2024-11-29 12:53:22.278970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.568 [2024-11-29 12:53:22.321896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.867 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.867 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:22.867 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:23.201 Nvme0n1 00:09:23.201 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:23.201 [ 00:09:23.201 { 00:09:23.201 "name": "Nvme0n1", 00:09:23.201 "aliases": [ 00:09:23.201 "cf176f12-7919-44fe-994e-8b8d0b9d4f52" 00:09:23.201 ], 00:09:23.201 "product_name": "NVMe disk", 00:09:23.201 "block_size": 4096, 00:09:23.201 "num_blocks": 38912, 00:09:23.201 "uuid": "cf176f12-7919-44fe-994e-8b8d0b9d4f52", 00:09:23.201 "numa_id": 1, 00:09:23.201 "assigned_rate_limits": { 00:09:23.201 "rw_ios_per_sec": 0, 00:09:23.201 "rw_mbytes_per_sec": 0, 00:09:23.201 "r_mbytes_per_sec": 0, 00:09:23.201 "w_mbytes_per_sec": 0 00:09:23.201 }, 00:09:23.201 "claimed": false, 00:09:23.201 "zoned": false, 00:09:23.201 "supported_io_types": { 00:09:23.201 "read": true, 
00:09:23.201 "write": true, 00:09:23.201 "unmap": true, 00:09:23.201 "flush": true, 00:09:23.201 "reset": true, 00:09:23.201 "nvme_admin": true, 00:09:23.201 "nvme_io": true, 00:09:23.201 "nvme_io_md": false, 00:09:23.201 "write_zeroes": true, 00:09:23.201 "zcopy": false, 00:09:23.201 "get_zone_info": false, 00:09:23.201 "zone_management": false, 00:09:23.201 "zone_append": false, 00:09:23.201 "compare": true, 00:09:23.201 "compare_and_write": true, 00:09:23.201 "abort": true, 00:09:23.201 "seek_hole": false, 00:09:23.201 "seek_data": false, 00:09:23.201 "copy": true, 00:09:23.201 "nvme_iov_md": false 00:09:23.201 }, 00:09:23.201 "memory_domains": [ 00:09:23.201 { 00:09:23.201 "dma_device_id": "system", 00:09:23.201 "dma_device_type": 1 00:09:23.201 } 00:09:23.201 ], 00:09:23.201 "driver_specific": { 00:09:23.201 "nvme": [ 00:09:23.201 { 00:09:23.201 "trid": { 00:09:23.201 "trtype": "TCP", 00:09:23.201 "adrfam": "IPv4", 00:09:23.201 "traddr": "10.0.0.2", 00:09:23.201 "trsvcid": "4420", 00:09:23.201 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:23.201 }, 00:09:23.201 "ctrlr_data": { 00:09:23.201 "cntlid": 1, 00:09:23.201 "vendor_id": "0x8086", 00:09:23.201 "model_number": "SPDK bdev Controller", 00:09:23.201 "serial_number": "SPDK0", 00:09:23.201 "firmware_revision": "25.01", 00:09:23.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:23.201 "oacs": { 00:09:23.201 "security": 0, 00:09:23.201 "format": 0, 00:09:23.201 "firmware": 0, 00:09:23.201 "ns_manage": 0 00:09:23.201 }, 00:09:23.201 "multi_ctrlr": true, 00:09:23.201 "ana_reporting": false 00:09:23.201 }, 00:09:23.201 "vs": { 00:09:23.201 "nvme_version": "1.3" 00:09:23.201 }, 00:09:23.201 "ns_data": { 00:09:23.201 "id": 1, 00:09:23.201 "can_share": true 00:09:23.201 } 00:09:23.201 } 00:09:23.201 ], 00:09:23.201 "mp_policy": "active_passive" 00:09:23.201 } 00:09:23.201 } 00:09:23.201 ] 00:09:23.201 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1849224 00:09:23.201 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:23.201 12:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:23.483 Running I/O for 10 seconds... 00:09:24.434 Latency(us) 00:09:24.434 [2024-11-29T11:53:24.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.434 Nvme0n1 : 1.00 21726.00 84.87 0.00 0.00 0.00 0.00 0.00 00:09:24.434 [2024-11-29T11:53:24.254Z] =================================================================================================================== 00:09:24.434 [2024-11-29T11:53:24.254Z] Total : 21726.00 84.87 0.00 0.00 0.00 0.00 0.00 00:09:24.434 00:09:25.369 12:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:25.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.369 Nvme0n1 : 2.00 21815.00 85.21 0.00 0.00 0.00 0.00 0.00 00:09:25.369 [2024-11-29T11:53:25.189Z] =================================================================================================================== 00:09:25.369 [2024-11-29T11:53:25.189Z] Total : 21815.00 85.21 0.00 0.00 0.00 0.00 0.00 00:09:25.369 00:09:25.369 true 00:09:25.369 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:25.369 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:25.627 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:25.627 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:25.627 12:53:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1849224 00:09:26.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.568 Nvme0n1 : 3.00 21836.67 85.30 0.00 0.00 0.00 0.00 0.00 00:09:26.568 [2024-11-29T11:53:26.388Z] =================================================================================================================== 00:09:26.568 [2024-11-29T11:53:26.388Z] Total : 21836.67 85.30 0.00 0.00 0.00 0.00 0.00 00:09:26.568 00:09:27.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.503 Nvme0n1 : 4.00 21903.50 85.56 0.00 0.00 0.00 0.00 0.00 00:09:27.503 [2024-11-29T11:53:27.323Z] =================================================================================================================== 00:09:27.503 [2024-11-29T11:53:27.323Z] Total : 21903.50 85.56 0.00 0.00 0.00 0.00 0.00 00:09:27.503 00:09:28.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.440 Nvme0n1 : 5.00 21943.60 85.72 0.00 0.00 0.00 0.00 0.00 00:09:28.440 [2024-11-29T11:53:28.260Z] =================================================================================================================== 00:09:28.440 [2024-11-29T11:53:28.260Z] Total : 21943.60 85.72 0.00 0.00 0.00 0.00 0.00 00:09:28.440 00:09:29.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.376 Nvme0n1 : 6.00 21973.00 85.83 0.00 0.00 0.00 0.00 0.00 00:09:29.376 [2024-11-29T11:53:29.196Z] =================================================================================================================== 00:09:29.376 
[2024-11-29T11:53:29.196Z] Total : 21973.00 85.83 0.00 0.00 0.00 0.00 0.00 00:09:29.376 00:09:30.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.312 Nvme0n1 : 7.00 22000.86 85.94 0.00 0.00 0.00 0.00 0.00 00:09:30.312 [2024-11-29T11:53:30.132Z] =================================================================================================================== 00:09:30.312 [2024-11-29T11:53:30.132Z] Total : 22000.86 85.94 0.00 0.00 0.00 0.00 0.00 00:09:30.312 00:09:31.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.688 Nvme0n1 : 8.00 21980.75 85.86 0.00 0.00 0.00 0.00 0.00 00:09:31.688 [2024-11-29T11:53:31.508Z] =================================================================================================================== 00:09:31.688 [2024-11-29T11:53:31.508Z] Total : 21980.75 85.86 0.00 0.00 0.00 0.00 0.00 00:09:31.688 00:09:32.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.625 Nvme0n1 : 9.00 22005.11 85.96 0.00 0.00 0.00 0.00 0.00 00:09:32.625 [2024-11-29T11:53:32.445Z] =================================================================================================================== 00:09:32.625 [2024-11-29T11:53:32.445Z] Total : 22005.11 85.96 0.00 0.00 0.00 0.00 0.00 00:09:32.625 00:09:33.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.561 Nvme0n1 : 10.00 22028.60 86.05 0.00 0.00 0.00 0.00 0.00 00:09:33.561 [2024-11-29T11:53:33.381Z] =================================================================================================================== 00:09:33.561 [2024-11-29T11:53:33.381Z] Total : 22028.60 86.05 0.00 0.00 0.00 0.00 0.00 00:09:33.561 00:09:33.561 00:09:33.561 Latency(us) 00:09:33.561 [2024-11-29T11:53:33.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:33.561 Nvme0n1 : 10.01 22029.05 86.05 0.00 0.00 5806.31 1552.92 7465.41 00:09:33.561 [2024-11-29T11:53:33.381Z] =================================================================================================================== 00:09:33.561 [2024-11-29T11:53:33.381Z] Total : 22029.05 86.05 0.00 0.00 5806.31 1552.92 7465.41 00:09:33.561 { 00:09:33.561 "results": [ 00:09:33.561 { 00:09:33.561 "job": "Nvme0n1", 00:09:33.561 "core_mask": "0x2", 00:09:33.561 "workload": "randwrite", 00:09:33.561 "status": "finished", 00:09:33.561 "queue_depth": 128, 00:09:33.561 "io_size": 4096, 00:09:33.561 "runtime": 10.005607, 00:09:33.561 "iops": 22029.048312611118, 00:09:33.561 "mibps": 86.05096997113718, 00:09:33.561 "io_failed": 0, 00:09:33.561 "io_timeout": 0, 00:09:33.561 "avg_latency_us": 5806.311663971475, 00:09:33.561 "min_latency_us": 1552.9182608695653, 00:09:33.561 "max_latency_us": 7465.405217391304 00:09:33.561 } 00:09:33.561 ], 00:09:33.561 "core_count": 1 00:09:33.561 } 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1849163 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1849163 ']' 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1849163 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1849163 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:33.561 12:53:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1849163' 00:09:33.561 killing process with pid 1849163 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1849163 00:09:33.561 Received shutdown signal, test time was about 10.000000 seconds 00:09:33.561 00:09:33.561 Latency(us) 00:09:33.561 [2024-11-29T11:53:33.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.561 [2024-11-29T11:53:33.381Z] =================================================================================================================== 00:09:33.561 [2024-11-29T11:53:33.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1849163 00:09:33.561 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.820 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:34.079 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:34.079 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:34.337 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:34.337 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:34.337 12:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:34.337 [2024-11-29 12:53:34.099448] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:34.337 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:34.337 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:34.337 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:34.337 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.337 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.338 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.338 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.338 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.338 
12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.338 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.338 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:34.338 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:34.596 request: 00:09:34.596 { 00:09:34.596 "uuid": "3a2b31cb-9eaa-4494-abcd-d999d01170a3", 00:09:34.596 "method": "bdev_lvol_get_lvstores", 00:09:34.596 "req_id": 1 00:09:34.596 } 00:09:34.596 Got JSON-RPC error response 00:09:34.596 response: 00:09:34.596 { 00:09:34.596 "code": -19, 00:09:34.596 "message": "No such device" 00:09:34.596 } 00:09:34.596 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:34.596 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.596 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.596 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:34.596 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.854 aio_bdev 00:09:34.854 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev cf176f12-7919-44fe-994e-8b8d0b9d4f52 00:09:34.854 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=cf176f12-7919-44fe-994e-8b8d0b9d4f52 00:09:34.854 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.854 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:34.854 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.854 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.854 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:35.114 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cf176f12-7919-44fe-994e-8b8d0b9d4f52 -t 2000 00:09:35.114 [ 00:09:35.114 { 00:09:35.114 "name": "cf176f12-7919-44fe-994e-8b8d0b9d4f52", 00:09:35.114 "aliases": [ 00:09:35.114 "lvs/lvol" 00:09:35.114 ], 00:09:35.114 "product_name": "Logical Volume", 00:09:35.114 "block_size": 4096, 00:09:35.114 "num_blocks": 38912, 00:09:35.114 "uuid": "cf176f12-7919-44fe-994e-8b8d0b9d4f52", 00:09:35.114 "assigned_rate_limits": { 00:09:35.114 "rw_ios_per_sec": 0, 00:09:35.114 "rw_mbytes_per_sec": 0, 00:09:35.114 "r_mbytes_per_sec": 0, 00:09:35.114 "w_mbytes_per_sec": 0 00:09:35.114 }, 00:09:35.114 "claimed": false, 00:09:35.114 "zoned": false, 00:09:35.114 "supported_io_types": { 00:09:35.114 "read": true, 00:09:35.114 "write": true, 00:09:35.114 "unmap": true, 00:09:35.114 "flush": false, 00:09:35.114 "reset": true, 00:09:35.114 
"nvme_admin": false, 00:09:35.114 "nvme_io": false, 00:09:35.114 "nvme_io_md": false, 00:09:35.114 "write_zeroes": true, 00:09:35.114 "zcopy": false, 00:09:35.114 "get_zone_info": false, 00:09:35.114 "zone_management": false, 00:09:35.114 "zone_append": false, 00:09:35.114 "compare": false, 00:09:35.114 "compare_and_write": false, 00:09:35.114 "abort": false, 00:09:35.114 "seek_hole": true, 00:09:35.114 "seek_data": true, 00:09:35.114 "copy": false, 00:09:35.114 "nvme_iov_md": false 00:09:35.114 }, 00:09:35.114 "driver_specific": { 00:09:35.114 "lvol": { 00:09:35.114 "lvol_store_uuid": "3a2b31cb-9eaa-4494-abcd-d999d01170a3", 00:09:35.114 "base_bdev": "aio_bdev", 00:09:35.114 "thin_provision": false, 00:09:35.114 "num_allocated_clusters": 38, 00:09:35.114 "snapshot": false, 00:09:35.114 "clone": false, 00:09:35.114 "esnap_clone": false 00:09:35.114 } 00:09:35.114 } 00:09:35.114 } 00:09:35.114 ] 00:09:35.114 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:35.114 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:35.114 12:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:35.373 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:35.373 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:35.373 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:35.631 12:53:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:35.631 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cf176f12-7919-44fe-994e-8b8d0b9d4f52 00:09:35.889 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a2b31cb-9eaa-4494-abcd-d999d01170a3 00:09:35.889 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:36.147 00:09:36.147 real 0m15.674s 00:09:36.147 user 0m15.162s 00:09:36.147 sys 0m1.528s 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:36.147 ************************************ 00:09:36.147 END TEST lvs_grow_clean 00:09:36.147 ************************************ 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:36.147 ************************************ 
00:09:36.147 START TEST lvs_grow_dirty 00:09:36.147 ************************************ 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:36.147 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:36.406 12:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:36.406 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:36.406 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:36.664 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=99621239-194f-49b8-acef-e8c94845b1ff 00:09:36.664 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:36.664 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:36.923 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:36.923 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:36.923 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99621239-194f-49b8-acef-e8c94845b1ff lvol 150 00:09:37.183 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=939434c8-f25f-462f-8f9d-eb1979f764c1 00:09:37.183 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:37.183 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:37.183 [2024-11-29 12:53:36.927677] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:37.183 [2024-11-29 12:53:36.927737] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:37.183 true 00:09:37.183 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:37.183 12:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:37.442 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:37.442 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:37.701 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 939434c8-f25f-462f-8f9d-eb1979f764c1 00:09:37.701 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:37.960 [2024-11-29 12:53:37.685940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.960 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:38.219 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1851822 00:09:38.219 12:53:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:38.219 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1851822 /var/tmp/bdevperf.sock 00:09:38.219 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1851822 ']' 00:09:38.219 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:38.219 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.220 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:38.220 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:38.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:38.220 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.220 12:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.220 [2024-11-29 12:53:37.926215] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:09:38.220 [2024-11-29 12:53:37.926262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851822 ] 00:09:38.220 [2024-11-29 12:53:37.986783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.220 [2024-11-29 12:53:38.026977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.481 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.481 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:38.481 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:38.739 Nvme0n1 00:09:38.739 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:38.998 [ 00:09:38.998 { 00:09:38.998 "name": "Nvme0n1", 00:09:38.998 "aliases": [ 00:09:38.998 "939434c8-f25f-462f-8f9d-eb1979f764c1" 00:09:38.998 ], 00:09:38.998 "product_name": "NVMe disk", 00:09:38.998 "block_size": 4096, 00:09:38.998 "num_blocks": 38912, 00:09:38.998 "uuid": "939434c8-f25f-462f-8f9d-eb1979f764c1", 00:09:38.998 "numa_id": 1, 00:09:38.998 "assigned_rate_limits": { 00:09:38.998 "rw_ios_per_sec": 0, 00:09:38.998 "rw_mbytes_per_sec": 0, 00:09:38.998 "r_mbytes_per_sec": 0, 00:09:38.998 "w_mbytes_per_sec": 0 00:09:38.998 }, 00:09:38.998 "claimed": false, 00:09:38.998 "zoned": false, 00:09:38.998 "supported_io_types": { 00:09:38.998 "read": true, 
00:09:38.998 "write": true, 00:09:38.998 "unmap": true, 00:09:38.998 "flush": true, 00:09:38.998 "reset": true, 00:09:38.998 "nvme_admin": true, 00:09:38.998 "nvme_io": true, 00:09:38.998 "nvme_io_md": false, 00:09:38.998 "write_zeroes": true, 00:09:38.998 "zcopy": false, 00:09:38.998 "get_zone_info": false, 00:09:38.998 "zone_management": false, 00:09:38.998 "zone_append": false, 00:09:38.998 "compare": true, 00:09:38.998 "compare_and_write": true, 00:09:38.998 "abort": true, 00:09:38.998 "seek_hole": false, 00:09:38.998 "seek_data": false, 00:09:38.998 "copy": true, 00:09:38.998 "nvme_iov_md": false 00:09:38.998 }, 00:09:38.999 "memory_domains": [ 00:09:38.999 { 00:09:38.999 "dma_device_id": "system", 00:09:38.999 "dma_device_type": 1 00:09:38.999 } 00:09:38.999 ], 00:09:38.999 "driver_specific": { 00:09:38.999 "nvme": [ 00:09:38.999 { 00:09:38.999 "trid": { 00:09:38.999 "trtype": "TCP", 00:09:38.999 "adrfam": "IPv4", 00:09:38.999 "traddr": "10.0.0.2", 00:09:38.999 "trsvcid": "4420", 00:09:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:38.999 }, 00:09:38.999 "ctrlr_data": { 00:09:38.999 "cntlid": 1, 00:09:38.999 "vendor_id": "0x8086", 00:09:38.999 "model_number": "SPDK bdev Controller", 00:09:38.999 "serial_number": "SPDK0", 00:09:38.999 "firmware_revision": "25.01", 00:09:38.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:38.999 "oacs": { 00:09:38.999 "security": 0, 00:09:38.999 "format": 0, 00:09:38.999 "firmware": 0, 00:09:38.999 "ns_manage": 0 00:09:38.999 }, 00:09:38.999 "multi_ctrlr": true, 00:09:38.999 "ana_reporting": false 00:09:38.999 }, 00:09:38.999 "vs": { 00:09:38.999 "nvme_version": "1.3" 00:09:38.999 }, 00:09:38.999 "ns_data": { 00:09:38.999 "id": 1, 00:09:38.999 "can_share": true 00:09:38.999 } 00:09:38.999 } 00:09:38.999 ], 00:09:38.999 "mp_policy": "active_passive" 00:09:38.999 } 00:09:38.999 } 00:09:38.999 ] 00:09:38.999 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1851930 00:09:38.999 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:38.999 12:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:38.999 Running I/O for 10 seconds... 00:09:39.935 Latency(us) 00:09:39.935 [2024-11-29T11:53:39.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.935 Nvme0n1 : 1.00 22624.00 88.38 0.00 0.00 0.00 0.00 0.00 00:09:39.935 [2024-11-29T11:53:39.755Z] =================================================================================================================== 00:09:39.935 [2024-11-29T11:53:39.755Z] Total : 22624.00 88.38 0.00 0.00 0.00 0.00 0.00 00:09:39.935 00:09:40.873 12:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:41.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.132 Nvme0n1 : 2.00 22527.00 88.00 0.00 0.00 0.00 0.00 0.00 00:09:41.132 [2024-11-29T11:53:40.952Z] =================================================================================================================== 00:09:41.132 [2024-11-29T11:53:40.952Z] Total : 22527.00 88.00 0.00 0.00 0.00 0.00 0.00 00:09:41.132 00:09:41.132 true 00:09:41.132 12:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:41.132 12:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:41.390 12:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:41.390 12:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:41.390 12:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1851930 00:09:41.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.957 Nvme0n1 : 3.00 22619.67 88.36 0.00 0.00 0.00 0.00 0.00 00:09:41.957 [2024-11-29T11:53:41.777Z] =================================================================================================================== 00:09:41.957 [2024-11-29T11:53:41.777Z] Total : 22619.67 88.36 0.00 0.00 0.00 0.00 0.00 00:09:41.957 00:09:43.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.333 Nvme0n1 : 4.00 22717.50 88.74 0.00 0.00 0.00 0.00 0.00 00:09:43.333 [2024-11-29T11:53:43.153Z] =================================================================================================================== 00:09:43.333 [2024-11-29T11:53:43.153Z] Total : 22717.50 88.74 0.00 0.00 0.00 0.00 0.00 00:09:43.333 00:09:44.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.268 Nvme0n1 : 5.00 22776.60 88.97 0.00 0.00 0.00 0.00 0.00 00:09:44.268 [2024-11-29T11:53:44.088Z] =================================================================================================================== 00:09:44.268 [2024-11-29T11:53:44.088Z] Total : 22776.60 88.97 0.00 0.00 0.00 0.00 0.00 00:09:44.268 00:09:45.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.204 Nvme0n1 : 6.00 22834.17 89.20 0.00 0.00 0.00 0.00 0.00 00:09:45.204 [2024-11-29T11:53:45.024Z] =================================================================================================================== 00:09:45.204 
[2024-11-29T11:53:45.024Z] Total : 22834.17 89.20 0.00 0.00 0.00 0.00 0.00 00:09:45.204 00:09:46.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.145 Nvme0n1 : 7.00 22865.71 89.32 0.00 0.00 0.00 0.00 0.00 00:09:46.145 [2024-11-29T11:53:45.965Z] =================================================================================================================== 00:09:46.145 [2024-11-29T11:53:45.965Z] Total : 22865.71 89.32 0.00 0.00 0.00 0.00 0.00 00:09:46.145 00:09:47.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.080 Nvme0n1 : 8.00 22905.25 89.47 0.00 0.00 0.00 0.00 0.00 00:09:47.080 [2024-11-29T11:53:46.900Z] =================================================================================================================== 00:09:47.080 [2024-11-29T11:53:46.900Z] Total : 22905.25 89.47 0.00 0.00 0.00 0.00 0.00 00:09:47.080 00:09:48.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.016 Nvme0n1 : 9.00 22928.67 89.57 0.00 0.00 0.00 0.00 0.00 00:09:48.016 [2024-11-29T11:53:47.836Z] =================================================================================================================== 00:09:48.016 [2024-11-29T11:53:47.836Z] Total : 22928.67 89.57 0.00 0.00 0.00 0.00 0.00 00:09:48.016 00:09:48.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.954 Nvme0n1 : 10.00 22953.60 89.66 0.00 0.00 0.00 0.00 0.00 00:09:48.954 [2024-11-29T11:53:48.774Z] =================================================================================================================== 00:09:48.954 [2024-11-29T11:53:48.774Z] Total : 22953.60 89.66 0.00 0.00 0.00 0.00 0.00 00:09:48.954 00:09:48.954 00:09:48.954 Latency(us) 00:09:48.954 [2024-11-29T11:53:48.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:48.954 Nvme0n1 : 10.01 22954.38 89.67 0.00 0.00 5573.22 1937.59 11055.64 00:09:48.954 [2024-11-29T11:53:48.774Z] =================================================================================================================== 00:09:48.954 [2024-11-29T11:53:48.774Z] Total : 22954.38 89.67 0.00 0.00 5573.22 1937.59 11055.64 00:09:48.954 { 00:09:48.954 "results": [ 00:09:48.954 { 00:09:48.954 "job": "Nvme0n1", 00:09:48.954 "core_mask": "0x2", 00:09:48.954 "workload": "randwrite", 00:09:48.954 "status": "finished", 00:09:48.954 "queue_depth": 128, 00:09:48.954 "io_size": 4096, 00:09:48.954 "runtime": 10.005235, 00:09:48.954 "iops": 22954.383380300413, 00:09:48.954 "mibps": 89.66556007929849, 00:09:48.954 "io_failed": 0, 00:09:48.954 "io_timeout": 0, 00:09:48.954 "avg_latency_us": 5573.222464515269, 00:09:48.954 "min_latency_us": 1937.5860869565217, 00:09:48.954 "max_latency_us": 11055.638260869566 00:09:48.954 } 00:09:48.954 ], 00:09:48.954 "core_count": 1 00:09:48.954 } 00:09:48.954 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1851822 00:09:48.954 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1851822 ']' 00:09:48.954 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1851822 00:09:48.954 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:49.214 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.214 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1851822 00:09:49.214 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:49.214 12:53:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:49.214 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1851822' 00:09:49.214 killing process with pid 1851822 00:09:49.214 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1851822 00:09:49.214 Received shutdown signal, test time was about 10.000000 seconds 00:09:49.214 00:09:49.214 Latency(us) 00:09:49.214 [2024-11-29T11:53:49.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.214 [2024-11-29T11:53:49.034Z] =================================================================================================================== 00:09:49.214 [2024-11-29T11:53:49.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:49.214 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1851822 00:09:49.214 12:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:49.473 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.731 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:49.731 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1848703 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1848703 00:09:49.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1848703 Killed "${NVMF_APP[@]}" "$@" 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1853703 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1853703 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1853703 ']' 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.991 12:53:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.991 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.991 [2024-11-29 12:53:49.653773] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:09:49.991 [2024-11-29 12:53:49.653823] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.991 [2024-11-29 12:53:49.720236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.991 [2024-11-29 12:53:49.761851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.991 [2024-11-29 12:53:49.761888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.991 [2024-11-29 12:53:49.761895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.991 [2024-11-29 12:53:49.761901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.991 [2024-11-29 12:53:49.761906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:49.991 [2024-11-29 12:53:49.762484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.250 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.250 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:50.250 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.250 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.250 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.250 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.250 12:53:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:50.250 [2024-11-29 12:53:50.066741] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:50.250 [2024-11-29 12:53:50.066832] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:50.250 [2024-11-29 12:53:50.066861] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:50.510 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:50.510 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 939434c8-f25f-462f-8f9d-eb1979f764c1 00:09:50.510 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=939434c8-f25f-462f-8f9d-eb1979f764c1 
00:09:50.510 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.510 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:50.510 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.510 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.510 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:50.510 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 939434c8-f25f-462f-8f9d-eb1979f764c1 -t 2000 00:09:50.769 [ 00:09:50.769 { 00:09:50.769 "name": "939434c8-f25f-462f-8f9d-eb1979f764c1", 00:09:50.769 "aliases": [ 00:09:50.769 "lvs/lvol" 00:09:50.769 ], 00:09:50.769 "product_name": "Logical Volume", 00:09:50.769 "block_size": 4096, 00:09:50.769 "num_blocks": 38912, 00:09:50.769 "uuid": "939434c8-f25f-462f-8f9d-eb1979f764c1", 00:09:50.769 "assigned_rate_limits": { 00:09:50.769 "rw_ios_per_sec": 0, 00:09:50.769 "rw_mbytes_per_sec": 0, 00:09:50.769 "r_mbytes_per_sec": 0, 00:09:50.769 "w_mbytes_per_sec": 0 00:09:50.769 }, 00:09:50.769 "claimed": false, 00:09:50.769 "zoned": false, 00:09:50.769 "supported_io_types": { 00:09:50.769 "read": true, 00:09:50.769 "write": true, 00:09:50.769 "unmap": true, 00:09:50.769 "flush": false, 00:09:50.769 "reset": true, 00:09:50.769 "nvme_admin": false, 00:09:50.769 "nvme_io": false, 00:09:50.770 "nvme_io_md": false, 00:09:50.770 "write_zeroes": true, 00:09:50.770 "zcopy": false, 00:09:50.770 "get_zone_info": false, 00:09:50.770 "zone_management": false, 00:09:50.770 "zone_append": 
false, 00:09:50.770 "compare": false, 00:09:50.770 "compare_and_write": false, 00:09:50.770 "abort": false, 00:09:50.770 "seek_hole": true, 00:09:50.770 "seek_data": true, 00:09:50.770 "copy": false, 00:09:50.770 "nvme_iov_md": false 00:09:50.770 }, 00:09:50.770 "driver_specific": { 00:09:50.770 "lvol": { 00:09:50.770 "lvol_store_uuid": "99621239-194f-49b8-acef-e8c94845b1ff", 00:09:50.770 "base_bdev": "aio_bdev", 00:09:50.770 "thin_provision": false, 00:09:50.770 "num_allocated_clusters": 38, 00:09:50.770 "snapshot": false, 00:09:50.770 "clone": false, 00:09:50.770 "esnap_clone": false 00:09:50.770 } 00:09:50.770 } 00:09:50.770 } 00:09:50.770 ] 00:09:50.770 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:50.770 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:50.770 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:51.029 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:51.029 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:51.029 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:51.287 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:51.287 12:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:51.287 [2024-11-29 12:53:51.047584] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:51.287 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:51.287 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:51.287 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:51.287 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.287 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.287 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.287 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.288 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.288 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.288 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.288 12:53:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:51.288 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:51.546 request: 00:09:51.546 { 00:09:51.546 "uuid": "99621239-194f-49b8-acef-e8c94845b1ff", 00:09:51.546 "method": "bdev_lvol_get_lvstores", 00:09:51.546 "req_id": 1 00:09:51.546 } 00:09:51.546 Got JSON-RPC error response 00:09:51.546 response: 00:09:51.546 { 00:09:51.546 "code": -19, 00:09:51.546 "message": "No such device" 00:09:51.546 } 00:09:51.546 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:51.546 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:51.546 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:51.546 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:51.546 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:51.805 aio_bdev 00:09:51.805 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 939434c8-f25f-462f-8f9d-eb1979f764c1 00:09:51.805 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=939434c8-f25f-462f-8f9d-eb1979f764c1 00:09:51.805 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.805 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:51.805 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.805 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.805 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:52.064 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 939434c8-f25f-462f-8f9d-eb1979f764c1 -t 2000 00:09:52.064 [ 00:09:52.064 { 00:09:52.064 "name": "939434c8-f25f-462f-8f9d-eb1979f764c1", 00:09:52.064 "aliases": [ 00:09:52.064 "lvs/lvol" 00:09:52.064 ], 00:09:52.064 "product_name": "Logical Volume", 00:09:52.064 "block_size": 4096, 00:09:52.064 "num_blocks": 38912, 00:09:52.064 "uuid": "939434c8-f25f-462f-8f9d-eb1979f764c1", 00:09:52.064 "assigned_rate_limits": { 00:09:52.064 "rw_ios_per_sec": 0, 00:09:52.064 "rw_mbytes_per_sec": 0, 00:09:52.064 "r_mbytes_per_sec": 0, 00:09:52.064 "w_mbytes_per_sec": 0 00:09:52.064 }, 00:09:52.064 "claimed": false, 00:09:52.064 "zoned": false, 00:09:52.064 "supported_io_types": { 00:09:52.064 "read": true, 00:09:52.064 "write": true, 00:09:52.064 "unmap": true, 00:09:52.064 "flush": false, 00:09:52.064 "reset": true, 00:09:52.064 "nvme_admin": false, 00:09:52.064 "nvme_io": false, 00:09:52.064 "nvme_io_md": false, 00:09:52.064 "write_zeroes": true, 00:09:52.064 "zcopy": false, 00:09:52.064 "get_zone_info": false, 00:09:52.064 "zone_management": false, 00:09:52.064 "zone_append": false, 00:09:52.064 "compare": false, 00:09:52.064 "compare_and_write": false, 
00:09:52.064 "abort": false, 00:09:52.064 "seek_hole": true, 00:09:52.064 "seek_data": true, 00:09:52.064 "copy": false, 00:09:52.064 "nvme_iov_md": false 00:09:52.064 }, 00:09:52.064 "driver_specific": { 00:09:52.064 "lvol": { 00:09:52.064 "lvol_store_uuid": "99621239-194f-49b8-acef-e8c94845b1ff", 00:09:52.064 "base_bdev": "aio_bdev", 00:09:52.064 "thin_provision": false, 00:09:52.064 "num_allocated_clusters": 38, 00:09:52.064 "snapshot": false, 00:09:52.064 "clone": false, 00:09:52.064 "esnap_clone": false 00:09:52.064 } 00:09:52.064 } 00:09:52.064 } 00:09:52.064 ] 00:09:52.064 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:52.064 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:52.064 12:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:52.324 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:52.324 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:52.324 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:52.582 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:52.582 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 939434c8-f25f-462f-8f9d-eb1979f764c1 00:09:52.841 12:53:52 
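The two `jq` filters above (`.[0].free_clusters`, `.[0].total_data_clusters`) pull the cluster counts out of the `bdev_lvol_get_lvstores` output before the script asserts the grown sizes. A standalone sketch of the same extraction in Python; the field names mirror the jq filters, but the sample payload here is illustrative, not live data:

```python
import json

# Illustrative bdev_lvol_get_lvstores-style output. Field names match the
# jq filters used above; the numbers are the values the test asserts, not
# output captured from a live target.
lvstores_json = json.dumps([
    {"uuid": "99621239-194f-49b8-acef-e8c94845b1ff",
     "free_clusters": 61,
     "total_data_clusters": 99},
])

lvs = json.loads(lvstores_json)
free_clusters = lvs[0]["free_clusters"]      # jq -r '.[0].free_clusters'
data_clusters = lvs[0]["total_data_clusters"]  # jq -r '.[0].total_data_clusters'

# The shell script's (( free_clusters == 61 )) / (( data_clusters == 99 )):
assert free_clusters == 61
assert data_clusters == 99
```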
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99621239-194f-49b8-acef-e8c94845b1ff 00:09:52.841 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:53.100 00:09:53.100 real 0m16.883s 00:09:53.100 user 0m43.503s 00:09:53.100 sys 0m3.958s 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:53.100 ************************************ 00:09:53.100 END TEST lvs_grow_dirty 00:09:53.100 ************************************ 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:53.100 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:53.100 nvmf_trace.0 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.359 rmmod nvme_tcp 00:09:53.359 rmmod nvme_fabrics 00:09:53.359 rmmod nvme_keyring 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.359 12:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1853703 ']' 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1853703 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1853703 ']' 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1853703 
00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1853703 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1853703' 00:09:53.359 killing process with pid 1853703 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1853703 00:09:53.359 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1853703 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.618 12:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.530 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.530 00:09:55.530 real 0m40.713s 00:09:55.530 user 1m3.857s 00:09:55.530 sys 0m9.614s 00:09:55.530 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.530 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:55.530 ************************************ 00:09:55.530 END TEST nvmf_lvs_grow 00:09:55.530 ************************************ 00:09:55.530 12:53:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:55.530 12:53:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.530 12:53:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.530 12:53:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.530 ************************************ 00:09:55.530 START TEST nvmf_bdev_io_wait 00:09:55.530 ************************************ 00:09:55.530 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:55.787 * Looking for test storage... 
00:09:55.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:55.788 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.788 --rc genhtml_branch_coverage=1 00:09:55.788 --rc genhtml_function_coverage=1 00:09:55.788 --rc genhtml_legend=1 00:09:55.788 --rc geninfo_all_blocks=1 00:09:55.788 --rc geninfo_unexecuted_blocks=1 00:09:55.788 00:09:55.788 ' 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:55.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.788 --rc genhtml_branch_coverage=1 00:09:55.788 --rc genhtml_function_coverage=1 00:09:55.788 --rc genhtml_legend=1 00:09:55.788 --rc geninfo_all_blocks=1 00:09:55.788 --rc geninfo_unexecuted_blocks=1 00:09:55.788 00:09:55.788 ' 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:55.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.788 --rc genhtml_branch_coverage=1 00:09:55.788 --rc genhtml_function_coverage=1 00:09:55.788 --rc genhtml_legend=1 00:09:55.788 --rc geninfo_all_blocks=1 00:09:55.788 --rc geninfo_unexecuted_blocks=1 00:09:55.788 00:09:55.788 ' 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:55.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.788 --rc genhtml_branch_coverage=1 00:09:55.788 --rc genhtml_function_coverage=1 00:09:55.788 --rc genhtml_legend=1 00:09:55.788 --rc geninfo_all_blocks=1 00:09:55.788 --rc geninfo_unexecuted_blocks=1 00:09:55.788 00:09:55.788 ' 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.788 12:53:55 
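The trace above steps through `cmp_versions` from `scripts/common.sh`: each version string is split on `.`/`-` (`IFS=.-:`) and compared component by component as integers, which is how `lt 1.15 2` resolves to true. A minimal Python rendition of the same algorithm, as a sketch rather than a drop-in replacement:

```python
import re

def lt(ver1: str, ver2: str) -> bool:
    """Component-wise numeric version compare, mirroring cmp_versions in
    scripts/common.sh: split on '.' and '-', pad the shorter list with 0s,
    then compare left to right."""
    a = [int(x) for x in re.split(r"[.-]", ver1)]
    b = [int(x) for x in re.split(r"[.-]", ver2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b  # Python list comparison is already left-to-right

print(lt("1.15", "2"))  # True -- the lcov check traced above
```

Note that a plain string comparison would get this wrong (`"1.15" < "2"` lexically, but `"1.9" < "1.15"` would fail), which is why the script compares numeric components.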
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.788 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.789 12:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.070 12:54:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:01.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:01.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.070 12:54:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:01.070 Found net devices under 0000:86:00.0: cvl_0_0 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.070 
12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:01.070 Found net devices under 0000:86:00.1: cvl_0_1 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.070 12:54:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.070 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.328 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.328 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.328 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.328 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.328 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.328 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.328 12:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:10:01.328 00:10:01.328 --- 10.0.0.2 ping statistics --- 00:10:01.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.328 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:10:01.328 00:10:01.328 --- 10.0.0.1 ping statistics --- 00:10:01.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.328 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1858021 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1858021 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1858021 ']' 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.328 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.328 [2024-11-29 12:54:01.095366] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:10:01.328 [2024-11-29 12:54:01.095415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.587 [2024-11-29 12:54:01.159324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.587 [2024-11-29 12:54:01.203388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.587 [2024-11-29 12:54:01.203426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:01.587 [2024-11-29 12:54:01.203433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.587 [2024-11-29 12:54:01.203440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.587 [2024-11-29 12:54:01.203445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.587 [2024-11-29 12:54:01.204884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.587 [2024-11-29 12:54:01.206963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.587 [2024-11-29 12:54:01.206984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.587 [2024-11-29 12:54:01.206986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.587 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.587 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:01.587 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.587 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.587 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.587 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.587 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.588 12:54:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.588 [2024-11-29 12:54:01.375338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.588 Malloc0 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.588 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.588 
12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 [2024-11-29 12:54:01.430981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1858071 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1858073 
00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.847 { 00:10:01.847 "params": { 00:10:01.847 "name": "Nvme$subsystem", 00:10:01.847 "trtype": "$TEST_TRANSPORT", 00:10:01.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.847 "adrfam": "ipv4", 00:10:01.847 "trsvcid": "$NVMF_PORT", 00:10:01.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.847 "hdgst": ${hdgst:-false}, 00:10:01.847 "ddgst": ${ddgst:-false} 00:10:01.847 }, 00:10:01.847 "method": "bdev_nvme_attach_controller" 00:10:01.847 } 00:10:01.847 EOF 00:10:01.847 )") 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1858075 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.847 { 00:10:01.847 "params": { 00:10:01.847 "name": "Nvme$subsystem", 00:10:01.847 "trtype": "$TEST_TRANSPORT", 00:10:01.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.847 "adrfam": "ipv4", 00:10:01.847 "trsvcid": "$NVMF_PORT", 00:10:01.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.847 "hdgst": ${hdgst:-false}, 00:10:01.847 "ddgst": ${ddgst:-false} 00:10:01.847 }, 00:10:01.847 "method": "bdev_nvme_attach_controller" 00:10:01.847 } 00:10:01.847 EOF 00:10:01.847 )") 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1858078 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:01.847 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.847 12:54:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.847 { 00:10:01.847 "params": { 00:10:01.847 "name": "Nvme$subsystem", 00:10:01.847 "trtype": "$TEST_TRANSPORT", 00:10:01.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.847 "adrfam": "ipv4", 00:10:01.847 "trsvcid": "$NVMF_PORT", 00:10:01.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.848 "hdgst": ${hdgst:-false}, 00:10:01.848 "ddgst": ${ddgst:-false} 00:10:01.848 }, 00:10:01.848 "method": "bdev_nvme_attach_controller" 00:10:01.848 } 00:10:01.848 EOF 00:10:01.848 )") 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.848 { 00:10:01.848 "params": { 00:10:01.848 "name": "Nvme$subsystem", 00:10:01.848 "trtype": "$TEST_TRANSPORT", 00:10:01.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.848 "adrfam": "ipv4", 00:10:01.848 "trsvcid": "$NVMF_PORT", 00:10:01.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.848 "hdgst": ${hdgst:-false}, 00:10:01.848 "ddgst": ${ddgst:-false} 00:10:01.848 }, 00:10:01.848 "method": "bdev_nvme_attach_controller" 00:10:01.848 } 00:10:01.848 EOF 00:10:01.848 )") 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1858071 00:10:01.848 12:54:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.848 "params": { 00:10:01.848 "name": "Nvme1", 00:10:01.848 "trtype": "tcp", 00:10:01.848 "traddr": "10.0.0.2", 00:10:01.848 "adrfam": "ipv4", 00:10:01.848 "trsvcid": "4420", 00:10:01.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.848 "hdgst": false, 00:10:01.848 "ddgst": false 00:10:01.848 }, 00:10:01.848 "method": "bdev_nvme_attach_controller" 00:10:01.848 }' 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.848 "params": { 00:10:01.848 "name": "Nvme1", 00:10:01.848 "trtype": "tcp", 00:10:01.848 "traddr": "10.0.0.2", 00:10:01.848 "adrfam": "ipv4", 00:10:01.848 "trsvcid": "4420", 00:10:01.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.848 "hdgst": false, 00:10:01.848 "ddgst": false 00:10:01.848 }, 00:10:01.848 "method": "bdev_nvme_attach_controller" 00:10:01.848 }' 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.848 "params": { 00:10:01.848 "name": "Nvme1", 00:10:01.848 "trtype": "tcp", 00:10:01.848 "traddr": "10.0.0.2", 00:10:01.848 "adrfam": "ipv4", 00:10:01.848 "trsvcid": "4420", 00:10:01.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.848 "hdgst": false, 00:10:01.848 "ddgst": false 00:10:01.848 }, 00:10:01.848 "method": "bdev_nvme_attach_controller" 00:10:01.848 }' 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:01.848 12:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.848 "params": { 00:10:01.848 "name": "Nvme1", 00:10:01.848 "trtype": "tcp", 00:10:01.848 "traddr": "10.0.0.2", 00:10:01.848 "adrfam": "ipv4", 00:10:01.848 "trsvcid": "4420", 00:10:01.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.848 "hdgst": false, 00:10:01.848 "ddgst": false 00:10:01.848 }, 00:10:01.848 "method": "bdev_nvme_attach_controller" 00:10:01.848 }' 00:10:01.848 [2024-11-29 12:54:01.480207] Starting SPDK v25.01-pre git sha1 
0b658ecad / DPDK 24.03.0 initialization... 00:10:01.848 [2024-11-29 12:54:01.480258] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:01.848 [2024-11-29 12:54:01.483753] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:10:01.848 [2024-11-29 12:54:01.483797] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:01.848 [2024-11-29 12:54:01.484905] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:10:01.848 [2024-11-29 12:54:01.484939] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:10:01.848 [2024-11-29 12:54:01.484945] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:01.848 [2024-11-29 12:54:01.484984] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:01.848 [2024-11-29 12:54:01.666449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.106 [2024-11-29 12:54:01.709652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:02.106 [2024-11-29 12:54:01.760116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.106 [2024-11-29 12:54:01.802992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:02.106 [2024-11-29 
12:54:01.861401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.106 [2024-11-29 12:54:01.912636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:02.106 [2024-11-29 12:54:01.921050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.367 [2024-11-29 12:54:01.964163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:02.367 Running I/O for 1 seconds... 00:10:02.367 Running I/O for 1 seconds... 00:10:02.367 Running I/O for 1 seconds... 00:10:02.626 Running I/O for 1 seconds... 00:10:03.563 8270.00 IOPS, 32.30 MiB/s [2024-11-29T11:54:03.383Z] 237184.00 IOPS, 926.50 MiB/s 00:10:03.563 Latency(us) 00:10:03.563 [2024-11-29T11:54:03.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.563 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:03.563 Nvme1n1 : 1.00 236819.01 925.07 0.00 0.00 538.20 225.28 1531.55 00:10:03.563 [2024-11-29T11:54:03.383Z] =================================================================================================================== 00:10:03.563 [2024-11-29T11:54:03.383Z] Total : 236819.01 925.07 0.00 0.00 538.20 225.28 1531.55 00:10:03.563 00:10:03.563 Latency(us) 00:10:03.563 [2024-11-29T11:54:03.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.563 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:03.563 Nvme1n1 : 1.02 8304.82 32.44 0.00 0.00 15330.91 5385.35 19603.81 00:10:03.563 [2024-11-29T11:54:03.383Z] =================================================================================================================== 00:10:03.563 [2024-11-29T11:54:03.383Z] Total : 8304.82 32.44 0.00 0.00 15330.91 5385.35 19603.81 00:10:03.563 7526.00 IOPS, 29.40 MiB/s 00:10:03.563 Latency(us) 00:10:03.563 [2024-11-29T11:54:03.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.563 Job: Nvme1n1 (Core Mask 0x10, 
workload: write, depth: 128, IO size: 4096) 00:10:03.563 Nvme1n1 : 1.01 7627.11 29.79 0.00 0.00 16731.17 4843.97 31001.38 00:10:03.563 [2024-11-29T11:54:03.383Z] =================================================================================================================== 00:10:03.563 [2024-11-29T11:54:03.383Z] Total : 7627.11 29.79 0.00 0.00 16731.17 4843.97 31001.38 00:10:03.563 11082.00 IOPS, 43.29 MiB/s 00:10:03.563 Latency(us) 00:10:03.563 [2024-11-29T11:54:03.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.563 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:03.563 Nvme1n1 : 1.01 11144.98 43.54 0.00 0.00 11447.70 4900.95 19945.74 00:10:03.563 [2024-11-29T11:54:03.383Z] =================================================================================================================== 00:10:03.563 [2024-11-29T11:54:03.383Z] Total : 11144.98 43.54 0.00 0.00 11447.70 4900.95 19945.74 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1858073 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1858075 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1858078 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:03.563 12:54:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.563 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.563 rmmod nvme_tcp 00:10:03.563 rmmod nvme_fabrics 00:10:03.822 rmmod nvme_keyring 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1858021 ']' 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1858021 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1858021 ']' 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1858021 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1858021 
00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1858021' 00:10:03.822 killing process with pid 1858021 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1858021 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1858021 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.822 12:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.822 12:54:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:06.359 00:10:06.359 real 0m10.356s 00:10:06.359 user 0m16.553s 00:10:06.359 sys 0m5.771s 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.359 ************************************ 00:10:06.359 END TEST nvmf_bdev_io_wait 00:10:06.359 ************************************ 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.359 ************************************ 00:10:06.359 START TEST nvmf_queue_depth 00:10:06.359 ************************************ 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:06.359 * Looking for test storage... 
00:10:06.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:06.359 
12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.359 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.360 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:06.360 --rc genhtml_branch_coverage=1 00:10:06.360 --rc genhtml_function_coverage=1 00:10:06.360 --rc genhtml_legend=1 00:10:06.360 --rc geninfo_all_blocks=1 00:10:06.360 --rc geninfo_unexecuted_blocks=1 00:10:06.360 00:10:06.360 ' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.360 --rc genhtml_branch_coverage=1 00:10:06.360 --rc genhtml_function_coverage=1 00:10:06.360 --rc genhtml_legend=1 00:10:06.360 --rc geninfo_all_blocks=1 00:10:06.360 --rc geninfo_unexecuted_blocks=1 00:10:06.360 00:10:06.360 ' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.360 --rc genhtml_branch_coverage=1 00:10:06.360 --rc genhtml_function_coverage=1 00:10:06.360 --rc genhtml_legend=1 00:10:06.360 --rc geninfo_all_blocks=1 00:10:06.360 --rc geninfo_unexecuted_blocks=1 00:10:06.360 00:10:06.360 ' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.360 --rc genhtml_branch_coverage=1 00:10:06.360 --rc genhtml_function_coverage=1 00:10:06.360 --rc genhtml_legend=1 00:10:06.360 --rc geninfo_all_blocks=1 00:10:06.360 --rc geninfo_unexecuted_blocks=1 00:10:06.360 00:10:06.360 ' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.360 12:54:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.360 12:54:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.360 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.361 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.361 12:54:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.361 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.361 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.361 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.361 12:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.637 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.637 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.637 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.637 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.637 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.637 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.638 12:54:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:11.638 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:11.638 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:11.638 Found net devices under 0000:86:00.0: cvl_0_0 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:11.638 Found net devices under 0000:86:00.1: cvl_0_1 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.638 
12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:10:11.638 00:10:11.638 --- 10.0.0.2 ping statistics --- 00:10:11.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.638 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:10:11.638 00:10:11.638 --- 10.0.0.1 ping statistics --- 00:10:11.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.638 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.638 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1862291 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1862291 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1862291 ']' 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.639 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:11.898 [2024-11-29 12:54:11.480256] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:10:11.898 [2024-11-29 12:54:11.480302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.898 [2024-11-29 12:54:11.549327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.898 [2024-11-29 12:54:11.590333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.898 [2024-11-29 12:54:11.590371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.898 [2024-11-29 12:54:11.590378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.898 [2024-11-29 12:54:11.590384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.898 [2024-11-29 12:54:11.590389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.898 [2024-11-29 12:54:11.590917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.898 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.898 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:11.898 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.898 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.898 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.157 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.157 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:12.157 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.157 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.157 [2024-11-29 12:54:11.724803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.157 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.158 Malloc0 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.158 [2024-11-29 12:54:11.771216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.158 12:54:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1862404 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1862404 /var/tmp/bdevperf.sock 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1862404 ']' 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:12.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.158 12:54:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:12.158 [2024-11-29 12:54:11.823593] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:10:12.158 [2024-11-29 12:54:11.823636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862404 ] 00:10:12.158 [2024-11-29 12:54:11.886189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.158 [2024-11-29 12:54:11.929521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.417 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.417 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:12.418 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:12.418 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.418 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.677 NVMe0n1 00:10:12.677 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.677 12:54:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:12.677 Running I/O for 10 seconds... 
00:10:14.551 11264.00 IOPS, 44.00 MiB/s [2024-11-29T11:54:15.748Z] 11731.50 IOPS, 45.83 MiB/s [2024-11-29T11:54:16.684Z] 11897.00 IOPS, 46.47 MiB/s [2024-11-29T11:54:17.623Z] 11940.75 IOPS, 46.64 MiB/s [2024-11-29T11:54:18.560Z] 11955.80 IOPS, 46.70 MiB/s [2024-11-29T11:54:19.496Z] 11983.50 IOPS, 46.81 MiB/s [2024-11-29T11:54:20.433Z] 12002.00 IOPS, 46.88 MiB/s [2024-11-29T11:54:21.810Z] 12006.62 IOPS, 46.90 MiB/s [2024-11-29T11:54:22.377Z] 12034.56 IOPS, 47.01 MiB/s [2024-11-29T11:54:22.637Z] 12032.20 IOPS, 47.00 MiB/s 00:10:22.817 Latency(us) 00:10:22.817 [2024-11-29T11:54:22.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.817 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:22.817 Verification LBA range: start 0x0 length 0x4000 00:10:22.817 NVMe0n1 : 10.06 12042.00 47.04 0.00 0.00 84703.34 18805.98 57215.78 00:10:22.817 [2024-11-29T11:54:22.637Z] =================================================================================================================== 00:10:22.817 [2024-11-29T11:54:22.637Z] Total : 12042.00 47.04 0.00 0.00 84703.34 18805.98 57215.78 00:10:22.817 { 00:10:22.817 "results": [ 00:10:22.817 { 00:10:22.817 "job": "NVMe0n1", 00:10:22.817 "core_mask": "0x1", 00:10:22.817 "workload": "verify", 00:10:22.817 "status": "finished", 00:10:22.817 "verify_range": { 00:10:22.817 "start": 0, 00:10:22.817 "length": 16384 00:10:22.817 }, 00:10:22.817 "queue_depth": 1024, 00:10:22.817 "io_size": 4096, 00:10:22.817 "runtime": 10.059956, 00:10:22.817 "iops": 12042.000978930722, 00:10:22.817 "mibps": 47.039066323948134, 00:10:22.817 "io_failed": 0, 00:10:22.817 "io_timeout": 0, 00:10:22.817 "avg_latency_us": 84703.3422153951, 00:10:22.817 "min_latency_us": 18805.982608695653, 00:10:22.817 "max_latency_us": 57215.77739130435 00:10:22.817 } 00:10:22.817 ], 00:10:22.817 "core_count": 1 00:10:22.817 } 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 1862404 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1862404 ']' 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1862404 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1862404 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1862404' 00:10:22.817 killing process with pid 1862404 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1862404 00:10:22.817 Received shutdown signal, test time was about 10.000000 seconds 00:10:22.817 00:10:22.817 Latency(us) 00:10:22.817 [2024-11-29T11:54:22.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.817 [2024-11-29T11:54:22.637Z] =================================================================================================================== 00:10:22.817 [2024-11-29T11:54:22.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:22.817 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1862404 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:23.076 rmmod nvme_tcp 00:10:23.076 rmmod nvme_fabrics 00:10:23.076 rmmod nvme_keyring 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1862291 ']' 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1862291 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1862291 ']' 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1862291 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1862291 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1862291' 00:10:23.076 killing process with pid 1862291 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1862291 00:10:23.076 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1862291 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.335 12:54:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.239 12:54:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:25.239 00:10:25.239 real 0m19.258s 00:10:25.239 user 0m23.019s 00:10:25.239 sys 0m5.671s 00:10:25.239 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.239 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.239 ************************************ 00:10:25.239 END TEST nvmf_queue_depth 00:10:25.239 ************************************ 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.498 ************************************ 00:10:25.498 START TEST nvmf_target_multipath 00:10:25.498 ************************************ 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:25.498 * Looking for test storage... 
00:10:25.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.498 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:25.499 12:54:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.499 --rc genhtml_branch_coverage=1 00:10:25.499 --rc genhtml_function_coverage=1 00:10:25.499 --rc genhtml_legend=1 00:10:25.499 --rc geninfo_all_blocks=1 00:10:25.499 --rc geninfo_unexecuted_blocks=1 00:10:25.499 00:10:25.499 ' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.499 --rc genhtml_branch_coverage=1 00:10:25.499 --rc genhtml_function_coverage=1 00:10:25.499 --rc genhtml_legend=1 00:10:25.499 --rc geninfo_all_blocks=1 00:10:25.499 --rc geninfo_unexecuted_blocks=1 00:10:25.499 00:10:25.499 ' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.499 --rc genhtml_branch_coverage=1 00:10:25.499 --rc genhtml_function_coverage=1 00:10:25.499 --rc genhtml_legend=1 00:10:25.499 --rc geninfo_all_blocks=1 00:10:25.499 --rc geninfo_unexecuted_blocks=1 00:10:25.499 00:10:25.499 ' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.499 --rc genhtml_branch_coverage=1 00:10:25.499 --rc genhtml_function_coverage=1 00:10:25.499 --rc genhtml_legend=1 00:10:25.499 --rc geninfo_all_blocks=1 00:10:25.499 --rc geninfo_unexecuted_blocks=1 00:10:25.499 00:10:25.499 ' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.499 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.500 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.500 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.500 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.500 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.500 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:25.500 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:25.500 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.500 12:54:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.065 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.065 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.065 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.065 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:32.066 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:32.066 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:32.066 Found net devices under 0000:86:00.0: cvl_0_0 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.066 12:54:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:32.066 Found net devices under 0000:86:00.1: cvl_0_1 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.066 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:10:32.067 00:10:32.067 --- 10.0.0.2 ping statistics --- 00:10:32.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.067 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:10:32.067 00:10:32.067 --- 10.0.0.1 ping statistics --- 00:10:32.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.067 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.067 12:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:32.067 only one NIC for nvmf test 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:32.067 12:54:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.067 rmmod nvme_tcp 00:10:32.067 rmmod nvme_fabrics 00:10:32.067 rmmod nvme_keyring 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.067 12:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.488 00:10:33.488 real 0m8.063s 00:10:33.488 user 0m1.719s 00:10:33.488 sys 0m4.299s 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:33.488 ************************************ 00:10:33.488 END TEST nvmf_target_multipath 00:10:33.488 ************************************ 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.488 ************************************ 00:10:33.488 START TEST nvmf_zcopy 00:10:33.488 ************************************ 00:10:33.488 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:33.488 * Looking for test storage... 00:10:33.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.748 12:54:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.748 --rc genhtml_branch_coverage=1 00:10:33.748 --rc genhtml_function_coverage=1 00:10:33.748 --rc genhtml_legend=1 00:10:33.748 --rc geninfo_all_blocks=1 00:10:33.748 --rc geninfo_unexecuted_blocks=1 00:10:33.748 00:10:33.748 ' 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.748 --rc genhtml_branch_coverage=1 00:10:33.748 --rc genhtml_function_coverage=1 00:10:33.748 --rc genhtml_legend=1 00:10:33.748 --rc geninfo_all_blocks=1 00:10:33.748 --rc geninfo_unexecuted_blocks=1 00:10:33.748 00:10:33.748 ' 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.748 --rc genhtml_branch_coverage=1 00:10:33.748 --rc genhtml_function_coverage=1 00:10:33.748 --rc genhtml_legend=1 00:10:33.748 --rc geninfo_all_blocks=1 00:10:33.748 --rc geninfo_unexecuted_blocks=1 00:10:33.748 00:10:33.748 ' 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.748 --rc genhtml_branch_coverage=1 00:10:33.748 --rc 
genhtml_function_coverage=1 00:10:33.748 --rc genhtml_legend=1 00:10:33.748 --rc geninfo_all_blocks=1 00:10:33.748 --rc geninfo_unexecuted_blocks=1 00:10:33.748 00:10:33.748 ' 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.748 12:54:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.748 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:33.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:33.749 12:54:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:33.749 12:54:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.374 12:54:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.374 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.374 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.374 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.375 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.375 12:54:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.375 Found net devices under 0000:86:00.1: cvl_0_1 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.375 12:54:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:10:40.375 00:10:40.375 --- 10.0.0.2 ping statistics --- 00:10:40.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.375 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:10:40.375 00:10:40.375 --- 10.0.0.1 ping statistics --- 00:10:40.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.375 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1871294 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1871294 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1871294 ']' 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 [2024-11-29 12:54:39.468703] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:10:40.375 [2024-11-29 12:54:39.468745] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.375 [2024-11-29 12:54:39.537048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.375 [2024-11-29 12:54:39.576067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.375 [2024-11-29 12:54:39.576103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:40.375 [2024-11-29 12:54:39.576110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.375 [2024-11-29 12:54:39.576116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.375 [2024-11-29 12:54:39.576125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.375 [2024-11-29 12:54:39.576695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 [2024-11-29 12:54:39.709996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 [2024-11-29 12:54:39.730209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.375 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.376 malloc0 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:40.376 { 00:10:40.376 "params": { 00:10:40.376 "name": "Nvme$subsystem", 00:10:40.376 "trtype": "$TEST_TRANSPORT", 00:10:40.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:40.376 "adrfam": "ipv4", 00:10:40.376 "trsvcid": "$NVMF_PORT", 00:10:40.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:40.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:40.376 "hdgst": ${hdgst:-false}, 00:10:40.376 "ddgst": ${ddgst:-false} 00:10:40.376 }, 00:10:40.376 "method": "bdev_nvme_attach_controller" 00:10:40.376 } 00:10:40.376 EOF 00:10:40.376 )") 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:40.376 12:54:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:40.376 "params": { 00:10:40.376 "name": "Nvme1", 00:10:40.376 "trtype": "tcp", 00:10:40.376 "traddr": "10.0.0.2", 00:10:40.376 "adrfam": "ipv4", 00:10:40.376 "trsvcid": "4420", 00:10:40.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:40.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:40.376 "hdgst": false, 00:10:40.376 "ddgst": false 00:10:40.376 }, 00:10:40.376 "method": "bdev_nvme_attach_controller" 00:10:40.376 }' 00:10:40.376 [2024-11-29 12:54:39.813750] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:10:40.376 [2024-11-29 12:54:39.813792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871448 ] 00:10:40.376 [2024-11-29 12:54:39.875039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.376 [2024-11-29 12:54:39.916452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.376 Running I/O for 10 seconds... 
00:10:42.327 8380.00 IOPS, 65.47 MiB/s [2024-11-29T11:54:43.527Z] 8497.00 IOPS, 66.38 MiB/s [2024-11-29T11:54:44.465Z] 8525.67 IOPS, 66.61 MiB/s [2024-11-29T11:54:45.402Z] 8545.75 IOPS, 66.76 MiB/s [2024-11-29T11:54:46.338Z] 8558.40 IOPS, 66.86 MiB/s [2024-11-29T11:54:47.274Z] 8565.17 IOPS, 66.92 MiB/s [2024-11-29T11:54:48.212Z] 8564.14 IOPS, 66.91 MiB/s [2024-11-29T11:54:49.149Z] 8569.62 IOPS, 66.95 MiB/s [2024-11-29T11:54:50.528Z] 8574.33 IOPS, 66.99 MiB/s [2024-11-29T11:54:50.528Z] 8578.70 IOPS, 67.02 MiB/s 00:10:50.708 Latency(us) 00:10:50.708 [2024-11-29T11:54:50.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.708 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:50.708 Verification LBA range: start 0x0 length 0x1000 00:10:50.708 Nvme1n1 : 10.01 8581.91 67.05 0.00 0.00 14872.12 2407.74 24960.67 00:10:50.708 [2024-11-29T11:54:50.528Z] =================================================================================================================== 00:10:50.708 [2024-11-29T11:54:50.528Z] Total : 8581.91 67.05 0.00 0.00 14872.12 2407.74 24960.67 00:10:50.708 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1873080 00:10:50.708 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:50.708 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:50.708 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:50.708 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:50.708 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:50.708 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:50.708 12:54:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:50.708 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:50.708 { 00:10:50.708 "params": { 00:10:50.708 "name": "Nvme$subsystem", 00:10:50.708 "trtype": "$TEST_TRANSPORT", 00:10:50.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.709 "adrfam": "ipv4", 00:10:50.709 "trsvcid": "$NVMF_PORT", 00:10:50.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.709 "hdgst": ${hdgst:-false}, 00:10:50.709 "ddgst": ${ddgst:-false} 00:10:50.709 }, 00:10:50.709 "method": "bdev_nvme_attach_controller" 00:10:50.709 } 00:10:50.709 EOF 00:10:50.709 )") 00:10:50.709 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:50.709 [2024-11-29 12:54:50.325606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.709 [2024-11-29 12:54:50.325639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.709 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:50.709 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:50.709 12:54:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:50.709 "params": { 00:10:50.709 "name": "Nvme1", 00:10:50.709 "trtype": "tcp", 00:10:50.709 "traddr": "10.0.0.2", 00:10:50.709 "adrfam": "ipv4", 00:10:50.709 "trsvcid": "4420", 00:10:50.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.709 "hdgst": false, 00:10:50.709 "ddgst": false 00:10:50.709 }, 00:10:50.709 "method": "bdev_nvme_attach_controller" 00:10:50.709 }' 00:10:50.709 [2024-11-29 12:54:50.337597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.709 [2024-11-29 12:54:50.337610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.709 [2024-11-29 12:54:50.349624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.709 [2024-11-29 12:54:50.349634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.709 [2024-11-29 12:54:50.361656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.709 [2024-11-29 12:54:50.361667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.709 [2024-11-29 12:54:50.363150] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:10:50.709 [2024-11-29 12:54:50.363194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873080 ]
00:10:50.709 [2024-11-29 12:54:50.373687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:50.709 [2024-11-29 12:54:50.373699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line ERROR pair from subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext and nvmf_rpc.c:1520:nvmf_rpc_ns_paused repeats at roughly 12-14 ms intervals from 12:54:50.373687 through 12:54:52.558585; repeated occurrences elided; distinct entries retained below ...]
00:10:50.709 [2024-11-29 12:54:50.427505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:50.709 [2024-11-29 12:54:50.469630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:50.969 Running I/O for 5 seconds...
00:10:52.009 16434.00 IOPS, 128.39 MiB/s [2024-11-29T11:54:51.829Z]
00:10:52.789 [2024-11-29 12:54:52.558585]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.789 [2024-11-29 12:54:52.558606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.789 [2024-11-29 12:54:52.573024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.789 [2024-11-29 12:54:52.573044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.789 [2024-11-29 12:54:52.584197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.789 [2024-11-29 12:54:52.584227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.789 [2024-11-29 12:54:52.599215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.789 [2024-11-29 12:54:52.599234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.614575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.614595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.629039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.629058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.640303] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.640323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 16428.50 IOPS, 128.35 MiB/s [2024-11-29T11:54:52.870Z] [2024-11-29 12:54:52.655393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.655412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.670689] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.670709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.685311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.685330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.696571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.696590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.710889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.710907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.724863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.724882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.739135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.739154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.753191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.753221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.767672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.767691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.782495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.782519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.796776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.796795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.811113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.811133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.822203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.822222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.837511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.837530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.852631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.852650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.050 [2024-11-29 12:54:52.866941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.050 [2024-11-29 12:54:52.866967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.310 [2024-11-29 12:54:52.881074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.310 [2024-11-29 12:54:52.881093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.310 [2024-11-29 12:54:52.895637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.310 
[2024-11-29 12:54:52.895657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.310 [2024-11-29 12:54:52.906717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.310 [2024-11-29 12:54:52.906736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.310 [2024-11-29 12:54:52.920829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.310 [2024-11-29 12:54:52.920849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.310 [2024-11-29 12:54:52.934724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.310 [2024-11-29 12:54:52.934743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.310 [2024-11-29 12:54:52.949099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.310 [2024-11-29 12:54:52.949118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:52.959841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:52.959860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:52.974626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:52.974646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:52.985269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:52.985290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:52.999758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:52.999778] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:53.013944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:53.013969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:53.028629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:53.028648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:53.039203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:53.039226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:53.053614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:53.053634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:53.067595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:53.067614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:53.082143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:53.082162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:53.097491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:53.097511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.311 [2024-11-29 12:54:53.111932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:53.111957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:53.311 [2024-11-29 12:54:53.125864] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.311 [2024-11-29 12:54:53.125883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.139924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.139943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.153605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.153626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.167692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.167713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.181516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.181536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.195863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.195883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.206684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.206704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.221203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.221223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.235080] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.235099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.249204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.249225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.263531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.263551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.277646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.277665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.291741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.291761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.305790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.305815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.319996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.320016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.334082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.334101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.348386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.348406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.362314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.362335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.376471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.376491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.571 [2024-11-29 12:54:53.390756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.571 [2024-11-29 12:54:53.390776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.404496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.404516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.418696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.418716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.429870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.429889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.444360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.444380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.458177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 
[2024-11-29 12:54:53.458196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.472358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.472377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.486622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.486642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.500711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.500731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.515085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.515105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.526451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.526470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.541031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.541051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.554339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.554359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.568410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.568434] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.582409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.582429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.596580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.596600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.610271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.610292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.624254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.624273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.831 [2024-11-29 12:54:53.638511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.831 [2024-11-29 12:54:53.638530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 16458.00 IOPS, 128.58 MiB/s [2024-11-29T11:54:53.910Z] [2024-11-29 12:54:53.652384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.652403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.666092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.666112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.680310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.680329] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.694381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.694400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.708792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.708812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.720077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.720096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.734570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.734589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.748492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.748512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.762813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.762832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.773923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.773943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.090 [2024-11-29 12:54:53.788380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.090 [2024-11-29 12:54:53.788400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:54.091 [2024-11-29 12:54:53.802642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.091 [2024-11-29 12:54:53.802661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.091 [2024-11-29 12:54:53.816192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.091 [2024-11-29 12:54:53.816213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.091 [2024-11-29 12:54:53.830416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.091 [2024-11-29 12:54:53.830436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.091 [2024-11-29 12:54:53.844308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.091 [2024-11-29 12:54:53.844327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.091 [2024-11-29 12:54:53.858403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.091 [2024-11-29 12:54:53.858422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.091 [2024-11-29 12:54:53.872306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.091 [2024-11-29 12:54:53.872325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.091 [2024-11-29 12:54:53.886718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.091 [2024-11-29 12:54:53.886737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.091 [2024-11-29 12:54:53.900630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.091 [2024-11-29 12:54:53.900649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:53.915439] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:53.915458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:53.930615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:53.930635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:53.944648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:53.944667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:53.958712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:53.958732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:53.972576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:53.972594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:53.986793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:53.986812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.000954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.000973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.014993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.015012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.029187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.029206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.043433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.043452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.057898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.057917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.073366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.073385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.087540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.087560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.101423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.101443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.115302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.115321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.129178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.129199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.143150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 
[2024-11-29 12:54:54.143170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.157331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.157350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.351 [2024-11-29 12:54:54.168290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.351 [2024-11-29 12:54:54.168310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.611 [2024-11-29 12:54:54.183036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.611 [2024-11-29 12:54:54.183056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.611 [2024-11-29 12:54:54.196554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.611 [2024-11-29 12:54:54.196574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.611 [2024-11-29 12:54:54.210927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.611 [2024-11-29 12:54:54.210953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.611 [2024-11-29 12:54:54.225260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.611 [2024-11-29 12:54:54.225280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.611 [2024-11-29 12:54:54.239547] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.611 [2024-11-29 12:54:54.239566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.611 [2024-11-29 12:54:54.250300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.611 [2024-11-29 12:54:54.250319] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.611
[2024-11-29 12:54:54.265003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.611
[2024-11-29 12:54:54.265022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.611
[... the same subsystem.c:2126 "Requested NSID 1 already in use" / nvmf_rpc.c:1520 "Unable to add namespace" pair repeated for every retry from 12:54:54.279 through 12:54:54.635 ...]
16484.00 IOPS, 128.78 MiB/s [2024-11-29T11:54:54.691Z]
[... error pair repeated from 12:54:54.650 through 12:54:55.648 ...]
16501.20 IOPS, 128.92 MiB/s 00:10:55.907
Latency(us) 00:10:55.907
[2024-11-29T11:54:55.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.907
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:55.907
Nvme1n1 : 5.01 16503.19 128.93 0.00 0.00 7748.94 3704.21 16184.54 00:10:55.907
[2024-11-29T11:54:55.727Z] =================================================================================================================== 00:10:55.907
[2024-11-29T11:54:55.727Z] Total : 16503.19 128.93 0.00 0.00 7748.94 3704.21 16184.54 00:10:55.907
[2024-11-29 12:54:55.656763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.907
[2024-11-29 12:54:55.656782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.907
[... error pair repeated from 12:54:55.668 through 12:54:55.813 ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1873080) - No such process 00:10:56.166
12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1873080 00:10:56.166
12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.166
12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.166
12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:56.166
12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.166
12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:56.166
12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.166 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:56.166 delay0 00:10:56.166 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.166 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:56.166 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.166 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:56.166 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.166 12:54:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:56.166 [2024-11-29 12:54:55.955091] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:02.731 Initializing NVMe Controllers 00:11:02.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:02.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:02.731 Initialization complete. Launching workers. 
00:11:02.731 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 111 00:11:02.731 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 382, failed to submit 49 00:11:02.731 success 187, unsuccessful 195, failed 0 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.731 rmmod nvme_tcp 00:11:02.731 rmmod nvme_fabrics 00:11:02.731 rmmod nvme_keyring 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1871294 ']' 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1871294 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1871294 ']' 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1871294 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1871294 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1871294' 00:11:02.731 killing process with pid 1871294 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1871294 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1871294 00:11:02.731 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.732 12:55:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.639 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:04.898 00:11:04.898 real 0m31.227s 00:11:04.898 user 0m41.805s 00:11:04.898 sys 0m10.943s 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.898 ************************************ 00:11:04.898 END TEST nvmf_zcopy 00:11:04.898 ************************************ 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.898 ************************************ 00:11:04.898 START TEST nvmf_nmic 00:11:04.898 ************************************ 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:04.898 * Looking for test storage... 
00:11:04.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.898 12:55:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.898 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.899 --rc genhtml_branch_coverage=1 00:11:04.899 --rc genhtml_function_coverage=1 00:11:04.899 --rc genhtml_legend=1 00:11:04.899 --rc geninfo_all_blocks=1 00:11:04.899 --rc geninfo_unexecuted_blocks=1 
00:11:04.899 00:11:04.899 ' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.899 --rc genhtml_branch_coverage=1 00:11:04.899 --rc genhtml_function_coverage=1 00:11:04.899 --rc genhtml_legend=1 00:11:04.899 --rc geninfo_all_blocks=1 00:11:04.899 --rc geninfo_unexecuted_blocks=1 00:11:04.899 00:11:04.899 ' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.899 --rc genhtml_branch_coverage=1 00:11:04.899 --rc genhtml_function_coverage=1 00:11:04.899 --rc genhtml_legend=1 00:11:04.899 --rc geninfo_all_blocks=1 00:11:04.899 --rc geninfo_unexecuted_blocks=1 00:11:04.899 00:11:04.899 ' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.899 --rc genhtml_branch_coverage=1 00:11:04.899 --rc genhtml_function_coverage=1 00:11:04.899 --rc genhtml_legend=1 00:11:04.899 --rc geninfo_all_blocks=1 00:11:04.899 --rc geninfo_unexecuted_blocks=1 00:11:04.899 00:11:04.899 ' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.899 12:55:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.899 
12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.899 12:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.168 12:55:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:10.168 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:10.168 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:10.168 Found net devices under 0000:86:00.0: cvl_0_0 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:10.168 Found net devices under 0000:86:00.1: cvl_0_1 00:11:10.168 
12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:11:10.168 00:11:10.168 --- 10.0.0.2 ping statistics --- 00:11:10.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.168 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:11:10.168 00:11:10.168 --- 10.0.0.1 ping statistics --- 00:11:10.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.168 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.168 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1878495 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1878495 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1878495 ']' 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.169 12:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.169 [2024-11-29 12:55:09.864004] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:11:10.169 [2024-11-29 12:55:09.864047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.169 [2024-11-29 12:55:09.929662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.169 [2024-11-29 12:55:09.972200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.169 [2024-11-29 12:55:09.972239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:10.169 [2024-11-29 12:55:09.972247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.169 [2024-11-29 12:55:09.972254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.169 [2024-11-29 12:55:09.972260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.169 [2024-11-29 12:55:09.973866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.169 [2024-11-29 12:55:09.973885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.169 [2024-11-29 12:55:09.973974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.169 [2024-11-29 12:55:09.973976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 [2024-11-29 12:55:10.121661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.428 
12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 Malloc0 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 [2024-11-29 12:55:10.181017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:10.428 test case1: single bdev can't be used in multiple subsystems 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 [2024-11-29 12:55:10.204887] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:10.428 [2024-11-29 
12:55:10.204911] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:10.428 [2024-11-29 12:55:10.204918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.428 request: 00:11:10.428 { 00:11:10.428 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:10.428 "namespace": { 00:11:10.428 "bdev_name": "Malloc0", 00:11:10.428 "no_auto_visible": false, 00:11:10.428 "hide_metadata": false 00:11:10.428 }, 00:11:10.428 "method": "nvmf_subsystem_add_ns", 00:11:10.428 "req_id": 1 00:11:10.428 } 00:11:10.428 Got JSON-RPC error response 00:11:10.428 response: 00:11:10.428 { 00:11:10.428 "code": -32602, 00:11:10.428 "message": "Invalid parameters" 00:11:10.428 } 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:10.428 Adding namespace failed - expected result. 
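Test case1 above exercises the rule that a bdev claimed (exclusive_write) by one subsystem cannot be added to a second one; the second `nvmf_subsystem_add_ns` is expected to fail with the JSON-RPC "Invalid parameters" error shown. A minimal sketch against a running target, assuming `rpc.py` is on the path:

```shell
# Hedged sketch of test case1: the same Malloc0 bdev cannot back
# namespaces in two subsystems. Requires a running nvmf_tgt; the
# rpc.py location is an assumption.
RPC=./scripts/rpc.py

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# Second claim of the same bdev is expected to be rejected.
if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo ' Adding namespace failed - expected result.'
fi
```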
00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:10.428 test case2: host connect to nvmf target in multiple paths 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.428 [2024-11-29 12:55:10.217022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.428 12:55:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.806 12:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:12.743 12:55:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.743 12:55:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:12.743 12:55:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.743 12:55:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:12.743 12:55:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
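Test case2 above connects the host to the same subsystem through two TCP listeners (ports 4420 and 4421) and then polls `lsblk` until a device with the subsystem serial appears, which is what `waitforserial` does. A hedged sketch (the 15-iteration timeout mirrors the trace; host/IP values match the log):

```shell
# Hedged sketch of the multipath connect + waitforserial logic above.
# Requires nvme-cli and a reachable target at 10.0.0.2.
HOSTNQN=$(nvme gen-hostnqn)

nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# Poll until a block device carrying the subsystem serial shows up,
# giving up after ~15 tries as the traced helper does.
i=0
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
    (( i++ >= 15 )) && { echo 'device never appeared' >&2; exit 1; }
    sleep 2
done
```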
00:11:15.277 12:55:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:15.277 12:55:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:15.277 12:55:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.277 12:55:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:15.277 12:55:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.277 12:55:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:15.277 12:55:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:15.277 [global] 00:11:15.277 thread=1 00:11:15.277 invalidate=1 00:11:15.277 rw=write 00:11:15.277 time_based=1 00:11:15.277 runtime=1 00:11:15.277 ioengine=libaio 00:11:15.277 direct=1 00:11:15.277 bs=4096 00:11:15.277 iodepth=1 00:11:15.277 norandommap=0 00:11:15.277 numjobs=1 00:11:15.277 00:11:15.277 verify_dump=1 00:11:15.277 verify_backlog=512 00:11:15.277 verify_state_save=0 00:11:15.277 do_verify=1 00:11:15.277 verify=crc32c-intel 00:11:15.277 [job0] 00:11:15.277 filename=/dev/nvme0n1 00:11:15.277 Could not set queue depth (nvme0n1) 00:11:15.277 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:15.277 fio-3.35 00:11:15.277 Starting 1 thread 00:11:16.212 00:11:16.212 job0: (groupid=0, jobs=1): err= 0: pid=1879540: Fri Nov 29 12:55:15 2024 00:11:16.212 read: IOPS=749, BW=2997KiB/s (3069kB/s)(3000KiB/1001msec) 00:11:16.212 slat (nsec): min=6520, max=27192, avg=7737.85, stdev=2452.74 00:11:16.212 clat (usec): min=175, max=41380, avg=1094.29, stdev=5894.70 00:11:16.212 lat (usec): min=182, max=41390, 
avg=1102.02, stdev=5896.71 00:11:16.212 clat percentiles (usec): 00:11:16.212 | 1.00th=[ 182], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 217], 00:11:16.212 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 225], 00:11:16.212 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 265], 00:11:16.212 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:16.212 | 99.99th=[41157] 00:11:16.212 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:11:16.212 slat (nsec): min=9829, max=38454, avg=11007.52, stdev=1624.53 00:11:16.212 clat (usec): min=117, max=334, avg=154.37, stdev=17.55 00:11:16.212 lat (usec): min=128, max=372, avg=165.37, stdev=17.98 00:11:16.212 clat percentiles (usec): 00:11:16.212 | 1.00th=[ 122], 5.00th=[ 126], 10.00th=[ 128], 20.00th=[ 133], 00:11:16.212 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:11:16.212 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 176], 00:11:16.212 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 192], 99.95th=[ 334], 00:11:16.212 | 99.99th=[ 334] 00:11:16.212 bw ( KiB/s): min= 4087, max= 4087, per=99.88%, avg=4087.00, stdev= 0.00, samples=1 00:11:16.212 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:16.212 lat (usec) : 250=95.83%, 500=3.27% 00:11:16.212 lat (msec) : 50=0.90% 00:11:16.212 cpu : usr=1.10%, sys=1.90%, ctx=1774, majf=0, minf=1 00:11:16.212 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.212 issued rwts: total=750,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.212 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.212 00:11:16.212 Run status group 0 (all jobs): 00:11:16.212 READ: bw=2997KiB/s (3069kB/s), 2997KiB/s-2997KiB/s (3069kB/s-3069kB/s), io=3000KiB (3072kB), 
run=1001-1001msec 00:11:16.212 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:11:16.212 00:11:16.212 Disk stats (read/write): 00:11:16.212 nvme0n1: ios=562/736, merge=0/0, ticks=784/111, in_queue=895, util=91.48% 00:11:16.212 12:55:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.473 rmmod nvme_tcp 00:11:16.473 rmmod nvme_fabrics 00:11:16.473 rmmod nvme_keyring 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1878495 ']' 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1878495 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1878495 ']' 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1878495 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.473 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1878495 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1878495' 00:11:16.732 killing process with pid 1878495 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1878495 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1878495 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:16.732 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:16.733 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.733 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.733 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.733 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.733 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.733 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.733 12:55:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.299 00:11:19.299 real 0m14.023s 00:11:19.299 user 0m32.751s 00:11:19.299 sys 0m4.610s 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:19.299 ************************************ 00:11:19.299 END TEST nvmf_nmic 00:11:19.299 ************************************ 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.299 ************************************ 00:11:19.299 START TEST nvmf_fio_target 00:11:19.299 ************************************ 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:19.299 * Looking for test storage... 00:11:19.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:19.299 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.300 12:55:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.300 --rc genhtml_branch_coverage=1 00:11:19.300 --rc genhtml_function_coverage=1 00:11:19.300 --rc genhtml_legend=1 00:11:19.300 --rc geninfo_all_blocks=1 00:11:19.300 --rc geninfo_unexecuted_blocks=1 00:11:19.300 00:11:19.300 ' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.300 --rc genhtml_branch_coverage=1 00:11:19.300 --rc genhtml_function_coverage=1 00:11:19.300 --rc genhtml_legend=1 00:11:19.300 --rc geninfo_all_blocks=1 00:11:19.300 --rc geninfo_unexecuted_blocks=1 00:11:19.300 00:11:19.300 ' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.300 --rc genhtml_branch_coverage=1 00:11:19.300 --rc genhtml_function_coverage=1 00:11:19.300 --rc genhtml_legend=1 00:11:19.300 --rc geninfo_all_blocks=1 00:11:19.300 --rc geninfo_unexecuted_blocks=1 00:11:19.300 00:11:19.300 ' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.300 --rc 
genhtml_branch_coverage=1 00:11:19.300 --rc genhtml_function_coverage=1 00:11:19.300 --rc genhtml_legend=1 00:11:19.300 --rc geninfo_all_blocks=1 00:11:19.300 --rc geninfo_unexecuted_blocks=1 00:11:19.300 00:11:19.300 ' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.300 12:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.574 12:55:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:24.574 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:24.574 12:55:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:24.574 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:24.574 Found net devices under 0000:86:00.0: cvl_0_0 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.574 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:24.575 Found net devices under 0000:86:00.1: cvl_0_1 
00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:24.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:11:24.575 00:11:24.575 --- 10.0.0.2 ping statistics --- 00:11:24.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.575 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:11:24.575 00:11:24.575 --- 10.0.0.1 ping statistics --- 00:11:24.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.575 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1883299 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1883299 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1883299 ']' 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.575 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.575 [2024-11-29 12:55:24.370917] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:11:24.575 [2024-11-29 12:55:24.370971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.835 [2024-11-29 12:55:24.439507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.835 [2024-11-29 12:55:24.481102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.835 [2024-11-29 12:55:24.481142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.835 [2024-11-29 12:55:24.481150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.835 [2024-11-29 12:55:24.481159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.835 [2024-11-29 12:55:24.481164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:24.835 [2024-11-29 12:55:24.482732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.835 [2024-11-29 12:55:24.482826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.835 [2024-11-29 12:55:24.482848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.835 [2024-11-29 12:55:24.482847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.835 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.835 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:24.835 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.835 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.835 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.835 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.835 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:25.095 [2024-11-29 12:55:24.794308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.095 12:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:25.354 12:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:25.354 12:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:25.612 12:55:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:25.612 12:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:25.872 12:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:25.872 12:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:26.129 12:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:26.129 12:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:26.129 12:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:26.387 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:26.387 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:26.645 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:26.645 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:26.903 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:26.903 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:27.161 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:27.161 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:27.161 12:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.418 12:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:27.418 12:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.727 12:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.727 [2024-11-29 12:55:27.520978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.986 12:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:27.986 12:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:28.244 12:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
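The trace up to this point is the target bring-up: create the TCP transport, back it with malloc and RAID bdevs, expose them as namespaces of nqn.2016-06.io.spdk:cnode1, add a TCP listener, and connect from the initiator with nvme-cli. A condensed, illustrative sketch of that RPC sequence, with arguments taken from the log above (not runnable standalone — it assumes a live nvmf_tgt and SPDK's rpc.py on PATH):

```shell
rpc=scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512                          # repeated for Malloc0 .. Malloc6
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect; the four namespaces then appear as nvme0n1..nvme0n4
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```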
00:11:29.617 12:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:29.617 12:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:29.617 12:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.617 12:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:29.617 12:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:29.617 12:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:31.522 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:31.523 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:31.523 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.523 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:31.523 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.523 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:31.523 12:55:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:31.523 [global] 00:11:31.523 thread=1 00:11:31.523 invalidate=1 00:11:31.523 rw=write 00:11:31.523 time_based=1 00:11:31.523 runtime=1 00:11:31.523 ioengine=libaio 00:11:31.523 direct=1 00:11:31.523 bs=4096 00:11:31.523 iodepth=1 00:11:31.523 norandommap=0 00:11:31.523 numjobs=1 00:11:31.523 00:11:31.523 
verify_dump=1 00:11:31.523 verify_backlog=512 00:11:31.523 verify_state_save=0 00:11:31.523 do_verify=1 00:11:31.523 verify=crc32c-intel 00:11:31.523 [job0] 00:11:31.523 filename=/dev/nvme0n1 00:11:31.523 [job1] 00:11:31.523 filename=/dev/nvme0n2 00:11:31.523 [job2] 00:11:31.523 filename=/dev/nvme0n3 00:11:31.523 [job3] 00:11:31.523 filename=/dev/nvme0n4 00:11:31.523 Could not set queue depth (nvme0n1) 00:11:31.523 Could not set queue depth (nvme0n2) 00:11:31.523 Could not set queue depth (nvme0n3) 00:11:31.523 Could not set queue depth (nvme0n4) 00:11:31.781 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.781 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.781 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.781 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.781 fio-3.35 00:11:31.781 Starting 4 threads 00:11:33.159 00:11:33.159 job0: (groupid=0, jobs=1): err= 0: pid=1884653: Fri Nov 29 12:55:32 2024 00:11:33.159 read: IOPS=1874, BW=7497KiB/s (7676kB/s)(7504KiB/1001msec) 00:11:33.159 slat (nsec): min=7232, max=45690, avg=8446.95, stdev=1909.58 00:11:33.159 clat (usec): min=174, max=41211, avg=335.75, stdev=2102.78 00:11:33.159 lat (usec): min=182, max=41222, avg=344.19, stdev=2103.40 00:11:33.159 clat percentiles (usec): 00:11:33.159 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:11:33.159 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 231], 60.00th=[ 239], 00:11:33.159 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 262], 00:11:33.159 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[41157], 99.95th=[41157], 00:11:33.159 | 99.99th=[41157] 00:11:33.159 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:33.159 slat (nsec): min=10749, max=46082, avg=12195.85, 
stdev=2071.35 00:11:33.159 clat (usec): min=110, max=399, avg=154.91, stdev=25.34 00:11:33.159 lat (usec): min=126, max=413, avg=167.10, stdev=25.63 00:11:33.159 clat percentiles (usec): 00:11:33.159 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:11:33.159 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:11:33.159 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 194], 95.00th=[ 204], 00:11:33.159 | 99.00th=[ 229], 99.50th=[ 235], 99.90th=[ 281], 99.95th=[ 297], 00:11:33.159 | 99.99th=[ 400] 00:11:33.159 bw ( KiB/s): min= 8040, max= 8040, per=56.87%, avg=8040.00, stdev= 0.00, samples=1 00:11:33.159 iops : min= 2010, max= 2010, avg=2010.00, stdev= 0.00, samples=1 00:11:33.159 lat (usec) : 250=90.70%, 500=9.17% 00:11:33.159 lat (msec) : 50=0.13% 00:11:33.159 cpu : usr=3.30%, sys=6.30%, ctx=3927, majf=0, minf=1 00:11:33.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.159 issued rwts: total=1876,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.159 job1: (groupid=0, jobs=1): err= 0: pid=1884654: Fri Nov 29 12:55:32 2024 00:11:33.159 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:11:33.159 slat (nsec): min=10887, max=25124, avg=17416.45, stdev=4650.06 00:11:33.159 clat (usec): min=40856, max=41967, avg=41021.51, stdev=220.00 00:11:33.159 lat (usec): min=40879, max=41988, avg=41038.92, stdev=220.51 00:11:33.159 clat percentiles (usec): 00:11:33.159 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:33.159 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:33.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:33.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:11:33.159 | 99.99th=[42206] 00:11:33.159 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:11:33.159 slat (nsec): min=11087, max=62003, avg=14926.78, stdev=6812.68 00:11:33.159 clat (usec): min=133, max=278, avg=183.46, stdev=16.52 00:11:33.159 lat (usec): min=163, max=339, avg=198.39, stdev=17.47 00:11:33.159 clat percentiles (usec): 00:11:33.159 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:11:33.159 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:11:33.159 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 202], 95.00th=[ 206], 00:11:33.159 | 99.00th=[ 219], 99.50th=[ 258], 99.90th=[ 281], 99.95th=[ 281], 00:11:33.159 | 99.99th=[ 281] 00:11:33.159 bw ( KiB/s): min= 4096, max= 4096, per=28.97%, avg=4096.00, stdev= 0.00, samples=1 00:11:33.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:33.159 lat (usec) : 250=95.13%, 500=0.75% 00:11:33.159 lat (msec) : 50=4.12% 00:11:33.159 cpu : usr=0.70%, sys=0.60%, ctx=535, majf=0, minf=1 00:11:33.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.159 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.159 job2: (groupid=0, jobs=1): err= 0: pid=1884655: Fri Nov 29 12:55:32 2024 00:11:33.159 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:11:33.159 slat (nsec): min=9665, max=37334, avg=15774.36, stdev=6741.44 00:11:33.159 clat (usec): min=40844, max=42007, avg=41179.07, stdev=403.21 00:11:33.159 lat (usec): min=40861, max=42017, avg=41194.85, stdev=401.19 00:11:33.159 clat percentiles (usec): 00:11:33.159 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:33.159 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:33.159 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:33.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:33.159 | 99.99th=[42206] 00:11:33.159 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:11:33.159 slat (nsec): min=10986, max=57722, avg=14484.30, stdev=5762.12 00:11:33.159 clat (usec): min=157, max=345, avg=189.12, stdev=17.08 00:11:33.159 lat (usec): min=169, max=381, avg=203.61, stdev=18.68 00:11:33.159 clat percentiles (usec): 00:11:33.159 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 178], 00:11:33.159 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:11:33.159 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 210], 00:11:33.159 | 99.00th=[ 239], 99.50th=[ 297], 99.90th=[ 347], 99.95th=[ 347], 00:11:33.159 | 99.99th=[ 347] 00:11:33.159 bw ( KiB/s): min= 4096, max= 4096, per=28.97%, avg=4096.00, stdev= 0.00, samples=1 00:11:33.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:33.159 lat (usec) : 250=94.94%, 500=0.94% 00:11:33.159 lat (msec) : 50=4.12% 00:11:33.159 cpu : usr=0.79%, sys=0.59%, ctx=534, majf=0, minf=2 00:11:33.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.159 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.159 job3: (groupid=0, jobs=1): err= 0: pid=1884657: Fri Nov 29 12:55:32 2024 00:11:33.159 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:11:33.159 slat (nsec): min=10129, max=29081, avg=18925.64, stdev=5685.81 00:11:33.159 clat (usec): min=40811, max=44840, avg=41170.64, stdev=830.69 00:11:33.159 lat (usec): min=40834, 
max=44869, avg=41189.57, stdev=832.46 00:11:33.159 clat percentiles (usec): 00:11:33.159 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:33.159 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:33.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:33.159 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:11:33.159 | 99.99th=[44827] 00:11:33.159 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:11:33.159 slat (nsec): min=10354, max=59943, avg=12052.43, stdev=2834.62 00:11:33.159 clat (usec): min=147, max=400, avg=194.29, stdev=23.17 00:11:33.159 lat (usec): min=158, max=411, avg=206.35, stdev=24.08 00:11:33.159 clat percentiles (usec): 00:11:33.159 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:11:33.159 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:11:33.159 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 223], 00:11:33.159 | 99.00th=[ 289], 99.50th=[ 334], 99.90th=[ 400], 99.95th=[ 400], 00:11:33.159 | 99.99th=[ 400] 00:11:33.159 bw ( KiB/s): min= 4096, max= 4096, per=28.97%, avg=4096.00, stdev= 0.00, samples=1 00:11:33.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:33.159 lat (usec) : 250=93.82%, 500=2.06% 00:11:33.159 lat (msec) : 50=4.12% 00:11:33.159 cpu : usr=0.20%, sys=1.09%, ctx=535, majf=0, minf=1 00:11:33.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.160 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.160 00:11:33.160 Run status group 0 (all jobs): 00:11:33.160 READ: bw=7661KiB/s (7845kB/s), 86.8KiB/s-7497KiB/s (88.9kB/s-7676kB/s), io=7768KiB 
(7954kB), run=1001-1014msec 00:11:33.160 WRITE: bw=13.8MiB/s (14.5MB/s), 2020KiB/s-8184KiB/s (2068kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1014msec 00:11:33.160 00:11:33.160 Disk stats (read/write): 00:11:33.160 nvme0n1: ios=1562/1562, merge=0/0, ticks=1532/228, in_queue=1760, util=97.90% 00:11:33.160 nvme0n2: ios=45/512, merge=0/0, ticks=767/81, in_queue=848, util=87.49% 00:11:33.160 nvme0n3: ios=31/512, merge=0/0, ticks=1049/86, in_queue=1135, util=91.44% 00:11:33.160 nvme0n4: ios=18/512, merge=0/0, ticks=739/96, in_queue=835, util=89.68% 00:11:33.160 12:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:33.160 [global] 00:11:33.160 thread=1 00:11:33.160 invalidate=1 00:11:33.160 rw=randwrite 00:11:33.160 time_based=1 00:11:33.160 runtime=1 00:11:33.160 ioengine=libaio 00:11:33.160 direct=1 00:11:33.160 bs=4096 00:11:33.160 iodepth=1 00:11:33.160 norandommap=0 00:11:33.160 numjobs=1 00:11:33.160 00:11:33.160 verify_dump=1 00:11:33.160 verify_backlog=512 00:11:33.160 verify_state_save=0 00:11:33.160 do_verify=1 00:11:33.160 verify=crc32c-intel 00:11:33.160 [job0] 00:11:33.160 filename=/dev/nvme0n1 00:11:33.160 [job1] 00:11:33.160 filename=/dev/nvme0n2 00:11:33.160 [job2] 00:11:33.160 filename=/dev/nvme0n3 00:11:33.160 [job3] 00:11:33.160 filename=/dev/nvme0n4 00:11:33.160 Could not set queue depth (nvme0n1) 00:11:33.160 Could not set queue depth (nvme0n2) 00:11:33.160 Could not set queue depth (nvme0n3) 00:11:33.160 Could not set queue depth (nvme0n4) 00:11:33.419 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.419 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.419 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.419 
job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.419 fio-3.35 00:11:33.419 Starting 4 threads 00:11:34.797 00:11:34.797 job0: (groupid=0, jobs=1): err= 0: pid=1885028: Fri Nov 29 12:55:34 2024 00:11:34.797 read: IOPS=22, BW=90.1KiB/s (92.3kB/s)(92.0KiB/1021msec) 00:11:34.797 slat (nsec): min=9112, max=25603, avg=18273.35, stdev=4664.59 00:11:34.797 clat (usec): min=413, max=42047, avg=39374.90, stdev=8502.68 00:11:34.797 lat (usec): min=439, max=42068, avg=39393.17, stdev=8501.12 00:11:34.797 clat percentiles (usec): 00:11:34.797 | 1.00th=[ 412], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:34.797 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:34.797 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:11:34.797 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:34.797 | 99.99th=[42206] 00:11:34.797 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:11:34.797 slat (nsec): min=9209, max=36387, avg=11810.19, stdev=2381.33 00:11:34.797 clat (usec): min=133, max=323, avg=210.18, stdev=32.33 00:11:34.797 lat (usec): min=144, max=359, avg=221.99, stdev=32.59 00:11:34.797 clat percentiles (usec): 00:11:34.797 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 180], 00:11:34.797 | 30.00th=[ 196], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 225], 00:11:34.797 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 253], 00:11:34.797 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 322], 99.95th=[ 322], 00:11:34.797 | 99.99th=[ 322] 00:11:34.797 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:11:34.797 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:34.797 lat (usec) : 250=89.91%, 500=5.98% 00:11:34.797 lat (msec) : 50=4.11% 00:11:34.797 cpu : usr=0.20%, sys=0.69%, ctx=535, majf=0, minf=1 00:11:34.797 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.797 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.797 job1: (groupid=0, jobs=1): err= 0: pid=1885030: Fri Nov 29 12:55:34 2024 00:11:34.797 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:11:34.797 slat (nsec): min=9874, max=28483, avg=23536.95, stdev=3362.29 00:11:34.797 clat (usec): min=40566, max=42986, avg=41185.41, stdev=554.50 00:11:34.797 lat (usec): min=40576, max=43015, avg=41208.94, stdev=556.31 00:11:34.797 clat percentiles (usec): 00:11:34.797 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:34.797 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:34.797 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:34.797 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:34.797 | 99.99th=[42730] 00:11:34.797 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:11:34.797 slat (nsec): min=10516, max=35338, avg=12666.90, stdev=2267.23 00:11:34.797 clat (usec): min=152, max=370, avg=183.25, stdev=19.34 00:11:34.797 lat (usec): min=164, max=405, avg=195.92, stdev=20.09 00:11:34.797 clat percentiles (usec): 00:11:34.797 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:11:34.797 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:11:34.797 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:11:34.797 | 99.00th=[ 245], 99.50th=[ 262], 99.90th=[ 371], 99.95th=[ 371], 00:11:34.797 | 99.99th=[ 371] 00:11:34.797 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:11:34.797 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:11:34.797 lat (usec) : 250=95.32%, 500=0.56% 00:11:34.797 lat (msec) : 50=4.12% 00:11:34.797 cpu : usr=0.40%, sys=0.99%, ctx=536, majf=0, minf=1 00:11:34.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.797 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.797 job2: (groupid=0, jobs=1): err= 0: pid=1885032: Fri Nov 29 12:55:34 2024 00:11:34.797 read: IOPS=582, BW=2329KiB/s (2385kB/s)(2348KiB/1008msec) 00:11:34.797 slat (nsec): min=6869, max=26964, avg=8122.19, stdev=2765.84 00:11:34.797 clat (usec): min=174, max=42041, avg=1373.08, stdev=6759.68 00:11:34.797 lat (usec): min=182, max=42063, avg=1381.20, stdev=6762.04 00:11:34.797 clat percentiles (usec): 00:11:34.797 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:11:34.797 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:11:34.797 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 245], 95.00th=[ 258], 00:11:34.797 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:34.797 | 99.99th=[42206] 00:11:34.797 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:11:34.797 slat (nsec): min=9192, max=43431, avg=11216.21, stdev=2305.81 00:11:34.797 clat (usec): min=124, max=353, avg=177.59, stdev=40.34 00:11:34.797 lat (usec): min=135, max=392, avg=188.80, stdev=41.53 00:11:34.797 clat percentiles (usec): 00:11:34.797 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:11:34.797 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 161], 60.00th=[ 180], 00:11:34.797 | 70.00th=[ 208], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 243], 00:11:34.797 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 334], 99.95th=[ 355], 00:11:34.797 | 99.99th=[ 355] 
00:11:34.797 bw ( KiB/s): min= 8192, max= 8192, per=81.68%, avg=8192.00, stdev= 0.00, samples=1 00:11:34.797 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:34.797 lat (usec) : 250=95.41%, 500=3.54% 00:11:34.797 lat (msec) : 50=1.06% 00:11:34.797 cpu : usr=0.60%, sys=1.79%, ctx=1611, majf=0, minf=1 00:11:34.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.797 issued rwts: total=587,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.797 job3: (groupid=0, jobs=1): err= 0: pid=1885033: Fri Nov 29 12:55:34 2024 00:11:34.797 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:11:34.797 slat (nsec): min=9753, max=30251, avg=23685.77, stdev=3423.91 00:11:34.797 clat (usec): min=40868, max=42160, avg=41168.36, stdev=416.44 00:11:34.797 lat (usec): min=40891, max=42184, avg=41192.05, stdev=415.91 00:11:34.797 clat percentiles (usec): 00:11:34.797 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:34.797 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:34.797 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:34.797 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:34.797 | 99.99th=[42206] 00:11:34.797 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:11:34.797 slat (nsec): min=9572, max=48663, avg=11199.37, stdev=3670.08 00:11:34.797 clat (usec): min=140, max=251, avg=175.30, stdev=16.28 00:11:34.797 lat (usec): min=151, max=283, avg=186.50, stdev=17.61 00:11:34.797 clat percentiles (usec): 00:11:34.797 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 161], 00:11:34.797 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 
00:11:34.797 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:11:34.797 | 99.00th=[ 219], 99.50th=[ 235], 99.90th=[ 251], 99.95th=[ 251], 00:11:34.797 | 99.99th=[ 251] 00:11:34.798 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:11:34.798 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:34.798 lat (usec) : 250=95.69%, 500=0.19% 00:11:34.798 lat (msec) : 50=4.12% 00:11:34.798 cpu : usr=0.30%, sys=0.50%, ctx=538, majf=0, minf=1 00:11:34.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.798 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.798 00:11:34.798 Run status group 0 (all jobs): 00:11:34.798 READ: bw=2562KiB/s (2624kB/s), 86.9KiB/s-2329KiB/s (89.0kB/s-2385kB/s), io=2616KiB (2679kB), run=1004-1021msec 00:11:34.798 WRITE: bw=9.79MiB/s (10.3MB/s), 2006KiB/s-4063KiB/s (2054kB/s-4161kB/s), io=10.0MiB (10.5MB), run=1004-1021msec 00:11:34.798 00:11:34.798 Disk stats (read/write): 00:11:34.798 nvme0n1: ios=67/512, merge=0/0, ticks=726/106, in_queue=832, util=86.97% 00:11:34.798 nvme0n2: ios=68/512, merge=0/0, ticks=974/85, in_queue=1059, util=98.07% 00:11:34.798 nvme0n3: ios=596/1024, merge=0/0, ticks=955/177, in_queue=1132, util=91.27% 00:11:34.798 nvme0n4: ios=61/512, merge=0/0, ticks=1492/85, in_queue=1577, util=96.54% 00:11:34.798 12:55:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:34.798 [global] 00:11:34.798 thread=1 00:11:34.798 invalidate=1 00:11:34.798 rw=write 00:11:34.798 time_based=1 00:11:34.798 runtime=1 00:11:34.798 ioengine=libaio 00:11:34.798 
direct=1 00:11:34.798 bs=4096 00:11:34.798 iodepth=128 00:11:34.798 norandommap=0 00:11:34.798 numjobs=1 00:11:34.798 00:11:34.798 verify_dump=1 00:11:34.798 verify_backlog=512 00:11:34.798 verify_state_save=0 00:11:34.798 do_verify=1 00:11:34.798 verify=crc32c-intel 00:11:34.798 [job0] 00:11:34.798 filename=/dev/nvme0n1 00:11:34.798 [job1] 00:11:34.798 filename=/dev/nvme0n2 00:11:34.798 [job2] 00:11:34.798 filename=/dev/nvme0n3 00:11:34.798 [job3] 00:11:34.798 filename=/dev/nvme0n4 00:11:34.798 Could not set queue depth (nvme0n1) 00:11:34.798 Could not set queue depth (nvme0n2) 00:11:34.798 Could not set queue depth (nvme0n3) 00:11:34.798 Could not set queue depth (nvme0n4) 00:11:35.055 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:35.055 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:35.055 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:35.055 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:35.055 fio-3.35 00:11:35.055 Starting 4 threads 00:11:36.430 00:11:36.430 job0: (groupid=0, jobs=1): err= 0: pid=1885426: Fri Nov 29 12:55:35 2024 00:11:36.430 read: IOPS=5504, BW=21.5MiB/s (22.5MB/s)(22.5MiB/1046msec) 00:11:36.430 slat (nsec): min=1257, max=9941.7k, avg=92837.63, stdev=659004.47 00:11:36.430 clat (usec): min=3636, max=61367, avg=12213.45, stdev=7163.65 00:11:36.430 lat (usec): min=3642, max=61370, avg=12306.29, stdev=7183.89 00:11:36.430 clat percentiles (usec): 00:11:36.430 | 1.00th=[ 4621], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 9765], 00:11:36.430 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:11:36.430 | 70.00th=[11076], 80.00th=[14222], 90.00th=[16450], 95.00th=[17957], 00:11:36.430 | 99.00th=[56886], 99.50th=[59507], 99.90th=[60556], 99.95th=[61604], 00:11:36.430 | 
99.99th=[61604] 00:11:36.430 write: IOPS=5873, BW=22.9MiB/s (24.1MB/s)(24.0MiB/1046msec); 0 zone resets 00:11:36.430 slat (usec): min=2, max=18686, avg=70.98, stdev=395.84 00:11:36.430 clat (usec): min=1756, max=61372, avg=10060.69, stdev=4313.95 00:11:36.430 lat (usec): min=1782, max=61376, avg=10131.67, stdev=4334.72 00:11:36.430 clat percentiles (usec): 00:11:36.430 | 1.00th=[ 3392], 5.00th=[ 4948], 10.00th=[ 6259], 20.00th=[ 8586], 00:11:36.430 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:11:36.430 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:11:36.430 | 99.00th=[35390], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:36.430 | 99.99th=[61604] 00:11:36.430 bw ( KiB/s): min=24560, max=24576, per=35.34%, avg=24568.00, stdev=11.31, samples=2 00:11:36.430 iops : min= 6140, max= 6144, avg=6142.00, stdev= 2.83, samples=2 00:11:36.430 lat (msec) : 2=0.02%, 4=1.28%, 10=30.65%, 20=65.90%, 50=1.09% 00:11:36.430 lat (msec) : 100=1.06% 00:11:36.430 cpu : usr=3.73%, sys=6.32%, ctx=794, majf=0, minf=1 00:11:36.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:36.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.430 issued rwts: total=5758,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.430 job1: (groupid=0, jobs=1): err= 0: pid=1885438: Fri Nov 29 12:55:35 2024 00:11:36.430 read: IOPS=5831, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1005msec) 00:11:36.431 slat (nsec): min=1349, max=10304k, avg=90809.54, stdev=677996.00 00:11:36.431 clat (usec): min=3616, max=20420, avg=11257.57, stdev=2492.33 00:11:36.431 lat (usec): min=3622, max=24561, avg=11348.38, stdev=2548.92 00:11:36.431 clat percentiles (usec): 00:11:36.431 | 1.00th=[ 4752], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[ 9765], 00:11:36.431 | 
30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:11:36.431 | 70.00th=[11338], 80.00th=[12387], 90.00th=[15139], 95.00th=[16909], 00:11:36.431 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19530], 99.95th=[20055], 00:11:36.431 | 99.99th=[20317] 00:11:36.431 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:11:36.431 slat (usec): min=2, max=23243, avg=70.70, stdev=499.97 00:11:36.431 clat (usec): min=1500, max=40946, avg=9995.86, stdev=3871.26 00:11:36.431 lat (usec): min=1513, max=40959, avg=10066.55, stdev=3902.90 00:11:36.431 clat percentiles (usec): 00:11:36.431 | 1.00th=[ 3359], 5.00th=[ 5342], 10.00th=[ 6652], 20.00th=[ 8717], 00:11:36.431 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:11:36.431 | 70.00th=[10421], 80.00th=[10552], 90.00th=[11076], 95.00th=[11338], 00:11:36.431 | 99.00th=[33162], 99.50th=[33162], 99.90th=[33162], 99.95th=[41157], 00:11:36.431 | 99.99th=[41157] 00:11:36.431 bw ( KiB/s): min=24576, max=24576, per=35.35%, avg=24576.00, stdev= 0.00, samples=2 00:11:36.431 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:11:36.431 lat (msec) : 2=0.06%, 4=1.47%, 10=32.29%, 20=65.09%, 50=1.08% 00:11:36.431 cpu : usr=5.68%, sys=5.68%, ctx=634, majf=0, minf=2 00:11:36.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:36.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.431 issued rwts: total=5861,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.431 job2: (groupid=0, jobs=1): err= 0: pid=1885456: Fri Nov 29 12:55:35 2024 00:11:36.431 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:11:36.431 slat (nsec): min=1401, max=13297k, avg=131939.25, stdev=885626.44 00:11:36.431 clat (usec): min=4253, max=33637, avg=15278.16, 
stdev=5278.78 00:11:36.431 lat (usec): min=4265, max=33640, avg=15410.10, stdev=5334.53 00:11:36.431 clat percentiles (usec): 00:11:36.431 | 1.00th=[ 5669], 5.00th=[10290], 10.00th=[11076], 20.00th=[12256], 00:11:36.431 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13566], 60.00th=[13960], 00:11:36.431 | 70.00th=[14877], 80.00th=[18220], 90.00th=[23462], 95.00th=[27919], 00:11:36.431 | 99.00th=[32637], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:11:36.431 | 99.99th=[33817] 00:11:36.431 write: IOPS=3641, BW=14.2MiB/s (14.9MB/s)(14.4MiB/1009msec); 0 zone resets 00:11:36.431 slat (usec): min=2, max=24631, avg=137.56, stdev=668.45 00:11:36.431 clat (usec): min=2915, max=36063, avg=19722.13, stdev=6115.64 00:11:36.431 lat (usec): min=2928, max=36080, avg=19859.69, stdev=6155.88 00:11:36.431 clat percentiles (usec): 00:11:36.431 | 1.00th=[ 4228], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[12125], 00:11:36.431 | 30.00th=[18744], 40.00th=[21890], 50.00th=[22414], 60.00th=[22938], 00:11:36.431 | 70.00th=[23200], 80.00th=[23462], 90.00th=[23987], 95.00th=[25822], 00:11:36.431 | 99.00th=[33817], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:11:36.431 | 99.99th=[35914] 00:11:36.431 bw ( KiB/s): min=12752, max=15920, per=20.62%, avg=14336.00, stdev=2240.11, samples=2 00:11:36.431 iops : min= 3188, max= 3980, avg=3584.00, stdev=560.03, samples=2 00:11:36.431 lat (msec) : 4=0.34%, 10=5.94%, 20=51.56%, 50=42.16% 00:11:36.431 cpu : usr=3.37%, sys=3.97%, ctx=447, majf=0, minf=1 00:11:36.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:36.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.431 issued rwts: total=3584,3674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.431 job3: (groupid=0, jobs=1): err= 0: pid=1885462: Fri Nov 29 12:55:35 2024 
00:11:36.431 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:11:36.431 slat (nsec): min=1037, max=24463k, avg=268098.83, stdev=1744217.27 00:11:36.431 clat (usec): min=16069, max=83629, avg=32777.56, stdev=16636.34 00:11:36.431 lat (usec): min=17323, max=83637, avg=33045.66, stdev=16672.13 00:11:36.431 clat percentiles (usec): 00:11:36.431 | 1.00th=[17695], 5.00th=[20055], 10.00th=[21627], 20.00th=[22414], 00:11:36.431 | 30.00th=[22676], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:11:36.431 | 70.00th=[32637], 80.00th=[47449], 90.00th=[63701], 95.00th=[65274], 00:11:36.431 | 99.00th=[83362], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:11:36.431 | 99.99th=[83362] 00:11:36.431 write: IOPS=2202, BW=8811KiB/s (9023kB/s)(8864KiB/1006msec); 0 zone resets 00:11:36.431 slat (nsec): min=1918, max=11177k, avg=199388.59, stdev=889100.03 00:11:36.431 clat (usec): min=784, max=67329, avg=27139.65, stdev=16754.39 00:11:36.431 lat (usec): min=789, max=67338, avg=27339.04, stdev=16813.92 00:11:36.431 clat percentiles (usec): 00:11:36.431 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 8029], 20.00th=[10683], 00:11:36.431 | 30.00th=[21365], 40.00th=[22414], 50.00th=[22938], 60.00th=[23200], 00:11:36.431 | 70.00th=[25297], 80.00th=[42206], 90.00th=[60556], 95.00th=[62129], 00:11:36.431 | 99.00th=[64226], 99.50th=[65799], 99.90th=[67634], 99.95th=[67634], 00:11:36.431 | 99.99th=[67634] 00:11:36.431 bw ( KiB/s): min= 7040, max= 9664, per=12.01%, avg=8352.00, stdev=1855.45, samples=2 00:11:36.431 iops : min= 1760, max= 2416, avg=2088.00, stdev=463.86, samples=2 00:11:36.431 lat (usec) : 1000=0.09% 00:11:36.431 lat (msec) : 10=9.19%, 20=6.64%, 50=67.05%, 100=17.03% 00:11:36.431 cpu : usr=1.69%, sys=1.39%, ctx=288, majf=0, minf=1 00:11:36.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:36.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:11:36.431 issued rwts: total=2048,2216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:36.431 00:11:36.431 Run status group 0 (all jobs): 00:11:36.431 READ: bw=64.4MiB/s (67.6MB/s), 8143KiB/s-22.8MiB/s (8339kB/s-23.9MB/s), io=67.4MiB (70.7MB), run=1005-1046msec 00:11:36.431 WRITE: bw=67.9MiB/s (71.2MB/s), 8811KiB/s-23.9MiB/s (9023kB/s-25.0MB/s), io=71.0MiB (74.5MB), run=1005-1046msec 00:11:36.431 00:11:36.431 Disk stats (read/write): 00:11:36.431 nvme0n1: ios=4898/5120, merge=0/0, ticks=53263/47654, in_queue=100917, util=99.10% 00:11:36.431 nvme0n2: ios=5019/5120, merge=0/0, ticks=54576/47235, in_queue=101811, util=100.00% 00:11:36.431 nvme0n3: ios=2987/3072, merge=0/0, ticks=44084/56954, in_queue=101038, util=98.85% 00:11:36.431 nvme0n4: ios=1599/2048, merge=0/0, ticks=14734/15075, in_queue=29809, util=89.72% 00:11:36.431 12:55:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:36.431 [global] 00:11:36.431 thread=1 00:11:36.431 invalidate=1 00:11:36.431 rw=randwrite 00:11:36.431 time_based=1 00:11:36.431 runtime=1 00:11:36.431 ioengine=libaio 00:11:36.431 direct=1 00:11:36.431 bs=4096 00:11:36.431 iodepth=128 00:11:36.431 norandommap=0 00:11:36.431 numjobs=1 00:11:36.431 00:11:36.431 verify_dump=1 00:11:36.431 verify_backlog=512 00:11:36.431 verify_state_save=0 00:11:36.431 do_verify=1 00:11:36.431 verify=crc32c-intel 00:11:36.431 [job0] 00:11:36.431 filename=/dev/nvme0n1 00:11:36.431 [job1] 00:11:36.431 filename=/dev/nvme0n2 00:11:36.431 [job2] 00:11:36.431 filename=/dev/nvme0n3 00:11:36.431 [job3] 00:11:36.431 filename=/dev/nvme0n4 00:11:36.431 Could not set queue depth (nvme0n1) 00:11:36.431 Could not set queue depth (nvme0n2) 00:11:36.431 Could not set queue depth (nvme0n3) 00:11:36.431 Could not set queue depth (nvme0n4) 
00:11:36.690 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.690 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.690 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.690 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:36.690 fio-3.35 00:11:36.690 Starting 4 threads 00:11:38.067 00:11:38.067 job0: (groupid=0, jobs=1): err= 0: pid=1885913: Fri Nov 29 12:55:37 2024 00:11:38.067 read: IOPS=2520, BW=9.84MiB/s (10.3MB/s)(9.91MiB/1007msec) 00:11:38.067 slat (nsec): min=1121, max=48324k, avg=210972.22, stdev=1626650.71 00:11:38.067 clat (usec): min=2485, max=87744, avg=25567.41, stdev=17261.41 00:11:38.067 lat (usec): min=4754, max=87752, avg=25778.38, stdev=17338.52 00:11:38.067 clat percentiles (usec): 00:11:38.067 | 1.00th=[ 8291], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[10552], 00:11:38.067 | 30.00th=[16057], 40.00th=[16712], 50.00th=[21103], 60.00th=[25035], 00:11:38.067 | 70.00th=[29230], 80.00th=[34341], 90.00th=[49546], 95.00th=[65799], 00:11:38.067 | 99.00th=[87557], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:11:38.067 | 99.99th=[87557] 00:11:38.067 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:11:38.067 slat (usec): min=2, max=28168, avg=173.55, stdev=1340.26 00:11:38.067 clat (usec): min=4100, max=80123, avg=23584.30, stdev=17392.14 00:11:38.067 lat (usec): min=4118, max=80130, avg=23757.85, stdev=17457.24 00:11:38.067 clat percentiles (usec): 00:11:38.067 | 1.00th=[ 5342], 5.00th=[ 7111], 10.00th=[ 8094], 20.00th=[ 9110], 00:11:38.067 | 30.00th=[13304], 40.00th=[16319], 50.00th=[17171], 60.00th=[19530], 00:11:38.067 | 70.00th=[24511], 80.00th=[31065], 90.00th=[51643], 95.00th=[63701], 00:11:38.067 | 99.00th=[80217], 99.50th=[80217], 
99.90th=[80217], 99.95th=[80217], 00:11:38.067 | 99.99th=[80217] 00:11:38.067 bw ( KiB/s): min= 8192, max=12288, per=17.11%, avg=10240.00, stdev=2896.31, samples=2 00:11:38.067 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:38.068 lat (msec) : 4=0.02%, 10=19.85%, 20=34.07%, 50=35.97%, 100=10.08% 00:11:38.068 cpu : usr=1.39%, sys=3.18%, ctx=191, majf=0, minf=1 00:11:38.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:38.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.068 issued rwts: total=2538,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.068 job1: (groupid=0, jobs=1): err= 0: pid=1885927: Fri Nov 29 12:55:37 2024 00:11:38.068 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:11:38.068 slat (nsec): min=1356, max=12873k, avg=106920.38, stdev=720783.12 00:11:38.068 clat (usec): min=3972, max=38001, avg=12480.10, stdev=5476.36 00:11:38.068 lat (usec): min=3978, max=38013, avg=12587.02, stdev=5543.20 00:11:38.068 clat percentiles (usec): 00:11:38.068 | 1.00th=[ 6587], 5.00th=[ 7701], 10.00th=[ 8356], 20.00th=[ 8979], 00:11:38.068 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10552], 00:11:38.068 | 70.00th=[11207], 80.00th=[16057], 90.00th=[22938], 95.00th=[24773], 00:11:38.068 | 99.00th=[29754], 99.50th=[31327], 99.90th=[33162], 99.95th=[33162], 00:11:38.068 | 99.99th=[38011] 00:11:38.068 write: IOPS=4314, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1010msec); 0 zone resets 00:11:38.068 slat (usec): min=2, max=8471, avg=124.12, stdev=569.42 00:11:38.068 clat (usec): min=2636, max=53076, avg=17677.71, stdev=9930.26 00:11:38.068 lat (usec): min=2644, max=53085, avg=17801.82, stdev=9998.91 00:11:38.068 clat percentiles (usec): 00:11:38.068 | 1.00th=[ 3884], 5.00th=[ 6980], 10.00th=[ 8225], 20.00th=[ 
8586], 00:11:38.068 | 30.00th=[ 9896], 40.00th=[13042], 50.00th=[16319], 60.00th=[17433], 00:11:38.068 | 70.00th=[18482], 80.00th=[26084], 90.00th=[34341], 95.00th=[38011], 00:11:38.068 | 99.00th=[45351], 99.50th=[47973], 99.90th=[53216], 99.95th=[53216], 00:11:38.068 | 99.99th=[53216] 00:11:38.068 bw ( KiB/s): min=16384, max=17464, per=28.28%, avg=16924.00, stdev=763.68, samples=2 00:11:38.068 iops : min= 4096, max= 4366, avg=4231.00, stdev=190.92, samples=2 00:11:38.068 lat (msec) : 4=0.67%, 10=35.42%, 20=44.01%, 50=19.74%, 100=0.15% 00:11:38.068 cpu : usr=2.87%, sys=3.96%, ctx=483, majf=0, minf=1 00:11:38.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:38.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.068 issued rwts: total=4096,4358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.068 job2: (groupid=0, jobs=1): err= 0: pid=1885945: Fri Nov 29 12:55:37 2024 00:11:38.068 read: IOPS=2510, BW=9.81MiB/s (10.3MB/s)(9.88MiB/1007msec) 00:11:38.068 slat (nsec): min=1511, max=19173k, avg=227577.41, stdev=1357835.44 00:11:38.068 clat (usec): min=2821, max=95463, avg=28328.04, stdev=17897.98 00:11:38.068 lat (usec): min=8677, max=95468, avg=28555.62, stdev=17988.78 00:11:38.068 clat percentiles (usec): 00:11:38.068 | 1.00th=[ 8848], 5.00th=[11600], 10.00th=[12387], 20.00th=[16712], 00:11:38.068 | 30.00th=[17957], 40.00th=[20055], 50.00th=[22152], 60.00th=[26346], 00:11:38.068 | 70.00th=[28705], 80.00th=[38536], 90.00th=[49546], 95.00th=[68682], 00:11:38.068 | 99.00th=[93848], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:11:38.068 | 99.99th=[95945] 00:11:38.068 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:11:38.068 slat (nsec): min=1933, max=22810k, avg=161085.15, stdev=1096535.69 00:11:38.068 clat (usec): 
min=6614, max=59147, avg=21569.94, stdev=9079.78 00:11:38.068 lat (usec): min=6622, max=59177, avg=21731.03, stdev=9147.73 00:11:38.068 clat percentiles (usec): 00:11:38.068 | 1.00th=[ 6652], 5.00th=[10814], 10.00th=[11600], 20.00th=[16319], 00:11:38.068 | 30.00th=[17171], 40.00th=[17433], 50.00th=[18744], 60.00th=[20841], 00:11:38.068 | 70.00th=[23462], 80.00th=[26870], 90.00th=[36963], 95.00th=[44827], 00:11:38.068 | 99.00th=[45876], 99.50th=[45876], 99.90th=[48497], 99.95th=[50594], 00:11:38.068 | 99.99th=[58983] 00:11:38.068 bw ( KiB/s): min= 8192, max=12288, per=17.11%, avg=10240.00, stdev=2896.31, samples=2 00:11:38.068 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:38.068 lat (msec) : 4=0.02%, 10=2.79%, 20=45.11%, 50=47.44%, 100=4.64% 00:11:38.068 cpu : usr=2.29%, sys=2.49%, ctx=216, majf=0, minf=1 00:11:38.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:38.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.068 issued rwts: total=2528,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.068 job3: (groupid=0, jobs=1): err= 0: pid=1885950: Fri Nov 29 12:55:37 2024 00:11:38.068 read: IOPS=5167, BW=20.2MiB/s (21.2MB/s)(20.4MiB/1009msec) 00:11:38.068 slat (nsec): min=1413, max=11569k, avg=92843.71, stdev=658850.17 00:11:38.068 clat (usec): min=3312, max=25998, avg=11752.23, stdev=3319.40 00:11:38.068 lat (usec): min=3324, max=26008, avg=11845.07, stdev=3367.69 00:11:38.068 clat percentiles (usec): 00:11:38.068 | 1.00th=[ 4555], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[ 9241], 00:11:38.068 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[12387], 00:11:38.068 | 70.00th=[13566], 80.00th=[14484], 90.00th=[16188], 95.00th=[17695], 00:11:38.068 | 99.00th=[21365], 99.50th=[23725], 99.90th=[25035], 
99.95th=[26084], 00:11:38.068 | 99.99th=[26084] 00:11:38.068 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:11:38.068 slat (usec): min=2, max=10387, avg=85.24, stdev=513.00 00:11:38.068 clat (usec): min=1611, max=69308, avg=11811.53, stdev=8366.99 00:11:38.068 lat (usec): min=1626, max=69320, avg=11896.77, stdev=8420.89 00:11:38.068 clat percentiles (usec): 00:11:38.068 | 1.00th=[ 3359], 5.00th=[ 5342], 10.00th=[ 7242], 20.00th=[ 8717], 00:11:38.068 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:11:38.068 | 70.00th=[10683], 80.00th=[13304], 90.00th=[17171], 95.00th=[23200], 00:11:38.068 | 99.00th=[57934], 99.50th=[65799], 99.90th=[69731], 99.95th=[69731], 00:11:38.068 | 99.99th=[69731] 00:11:38.068 bw ( KiB/s): min=20616, max=24176, per=37.43%, avg=22396.00, stdev=2517.30, samples=2 00:11:38.068 iops : min= 5154, max= 6044, avg=5599.00, stdev=629.33, samples=2 00:11:38.068 lat (msec) : 2=0.03%, 4=1.24%, 10=54.46%, 20=40.38%, 50=3.01% 00:11:38.068 lat (msec) : 100=0.88% 00:11:38.068 cpu : usr=3.77%, sys=7.84%, ctx=612, majf=0, minf=2 00:11:38.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:38.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:38.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:38.068 issued rwts: total=5214,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:38.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:38.068 00:11:38.068 Run status group 0 (all jobs): 00:11:38.068 READ: bw=55.6MiB/s (58.3MB/s), 9.81MiB/s-20.2MiB/s (10.3MB/s-21.2MB/s), io=56.2MiB (58.9MB), run=1007-1010msec 00:11:38.068 WRITE: bw=58.4MiB/s (61.3MB/s), 9.93MiB/s-21.8MiB/s (10.4MB/s-22.9MB/s), io=59.0MiB (61.9MB), run=1007-1010msec 00:11:38.068 00:11:38.068 Disk stats (read/write): 00:11:38.068 nvme0n1: ios=2098/2087, merge=0/0, ticks=20473/15447, in_queue=35920, util=87.07% 00:11:38.068 nvme0n2: 
ios=3434/3584, merge=0/0, ticks=42721/63054, in_queue=105775, util=87.11% 00:11:38.068 nvme0n3: ios=2048/2311, merge=0/0, ticks=23703/18870, in_queue=42573, util=88.54% 00:11:38.068 nvme0n4: ios=4650/4743, merge=0/0, ticks=51494/51225, in_queue=102719, util=99.05% 00:11:38.068 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:38.068 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1886048 00:11:38.068 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:38.068 12:55:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:38.068 [global] 00:11:38.068 thread=1 00:11:38.068 invalidate=1 00:11:38.068 rw=read 00:11:38.068 time_based=1 00:11:38.068 runtime=10 00:11:38.068 ioengine=libaio 00:11:38.068 direct=1 00:11:38.068 bs=4096 00:11:38.068 iodepth=1 00:11:38.068 norandommap=1 00:11:38.068 numjobs=1 00:11:38.068 00:11:38.068 [job0] 00:11:38.068 filename=/dev/nvme0n1 00:11:38.068 [job1] 00:11:38.068 filename=/dev/nvme0n2 00:11:38.068 [job2] 00:11:38.068 filename=/dev/nvme0n3 00:11:38.068 [job3] 00:11:38.068 filename=/dev/nvme0n4 00:11:38.068 Could not set queue depth (nvme0n1) 00:11:38.068 Could not set queue depth (nvme0n2) 00:11:38.068 Could not set queue depth (nvme0n3) 00:11:38.068 Could not set queue depth (nvme0n4) 00:11:38.068 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.068 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.068 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.068 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:38.068 fio-3.35 00:11:38.068 Starting 4 threads 
00:11:41.358 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:41.358 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=28110848, buflen=4096 00:11:41.358 fio: pid=1886378, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:41.358 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:41.358 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=311296, buflen=4096 00:11:41.358 fio: pid=1886377, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:41.358 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:41.358 12:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:41.358 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5865472, buflen=4096 00:11:41.358 fio: pid=1886375, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:41.358 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:41.358 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:41.618 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=46854144, buflen=4096 00:11:41.618 fio: pid=1886376, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:41.618 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:41.618 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:41.618 00:11:41.618 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1886375: Fri Nov 29 12:55:41 2024 00:11:41.618 read: IOPS=454, BW=1816KiB/s (1859kB/s)(5728KiB/3155msec) 00:11:41.618 slat (nsec): min=7200, max=60680, avg=9549.38, stdev=2790.27 00:11:41.618 clat (usec): min=184, max=42029, avg=2175.88, stdev=8691.54 00:11:41.618 lat (usec): min=191, max=42039, avg=2185.43, stdev=8692.71 00:11:41.618 clat percentiles (usec): 00:11:41.618 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:11:41.618 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:11:41.618 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 445], 00:11:41.618 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:41.618 | 99.99th=[42206] 00:11:41.618 bw ( KiB/s): min= 96, max= 8033, per=6.32%, avg=1485.50, stdev=3211.21, samples=6 00:11:41.618 iops : min= 24, max= 2008, avg=371.33, stdev=802.70, samples=6 00:11:41.618 lat (usec) : 250=82.21%, 500=12.91% 00:11:41.618 lat (msec) : 2=0.07%, 50=4.75% 00:11:41.618 cpu : usr=0.51%, sys=0.54%, ctx=1437, majf=0, minf=1 00:11:41.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.618 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.618 issued rwts: total=1433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:41.619 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1886376: Fri Nov 29 12:55:41 2024 00:11:41.619 read: IOPS=3394, 
BW=13.3MiB/s (13.9MB/s)(44.7MiB/3370msec) 00:11:41.619 slat (usec): min=6, max=11569, avg=11.26, stdev=173.17 00:11:41.619 clat (usec): min=183, max=41625, avg=279.51, stdev=396.11 00:11:41.619 lat (usec): min=193, max=41632, avg=290.77, stdev=433.38 00:11:41.619 clat percentiles (usec): 00:11:41.619 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:11:41.619 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:11:41.619 | 70.00th=[ 255], 80.00th=[ 400], 90.00th=[ 412], 95.00th=[ 420], 00:11:41.619 | 99.00th=[ 441], 99.50th=[ 449], 99.90th=[ 465], 99.95th=[ 515], 00:11:41.619 | 99.99th=[ 2212] 00:11:41.619 bw ( KiB/s): min= 9544, max=17528, per=57.08%, avg=13421.33, stdev=3613.44, samples=6 00:11:41.619 iops : min= 2386, max= 4382, avg=3355.33, stdev=903.36, samples=6 00:11:41.619 lat (usec) : 250=67.57%, 500=32.37%, 750=0.01% 00:11:41.619 lat (msec) : 2=0.03%, 4=0.01%, 50=0.01% 00:11:41.619 cpu : usr=1.75%, sys=5.58%, ctx=11444, majf=0, minf=1 00:11:41.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.619 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.619 issued rwts: total=11440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:41.619 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1886377: Fri Nov 29 12:55:41 2024 00:11:41.619 read: IOPS=26, BW=103KiB/s (106kB/s)(304KiB/2945msec) 00:11:41.619 slat (nsec): min=8093, max=74524, avg=16317.45, stdev=9471.07 00:11:41.619 clat (usec): min=346, max=42095, avg=38451.87, stdev=10154.77 00:11:41.619 lat (usec): min=373, max=42104, avg=38468.10, stdev=10152.47 00:11:41.619 clat percentiles (usec): 00:11:41.619 | 1.00th=[ 347], 5.00th=[ 529], 10.00th=[40633], 20.00th=[41157], 00:11:41.619 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:41.619 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:41.619 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:41.619 | 99.99th=[42206] 00:11:41.619 bw ( KiB/s): min= 96, max= 104, per=0.43%, avg=100.80, stdev= 4.38, samples=5 00:11:41.619 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:11:41.619 lat (usec) : 500=3.90%, 750=2.60% 00:11:41.619 lat (msec) : 50=92.21% 00:11:41.619 cpu : usr=0.07%, sys=0.00%, ctx=78, majf=0, minf=2 00:11:41.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.619 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.619 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:41.619 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1886378: Fri Nov 29 12:55:41 2024 00:11:41.619 read: IOPS=2518, BW=9.84MiB/s (10.3MB/s)(26.8MiB/2725msec) 00:11:41.619 slat (nsec): min=8151, max=42166, avg=9684.17, stdev=1736.38 00:11:41.619 clat (usec): min=201, max=41033, avg=381.32, stdev=2312.19 00:11:41.619 lat (usec): min=211, max=41054, avg=391.01, stdev=2312.84 00:11:41.619 clat percentiles (usec): 00:11:41.619 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:11:41.619 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:11:41.619 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:11:41.619 | 99.00th=[ 297], 99.50th=[ 453], 99.90th=[41157], 99.95th=[41157], 00:11:41.619 | 99.99th=[41157] 00:11:41.619 bw ( KiB/s): min= 96, max=15512, per=41.76%, avg=9820.80, stdev=6449.82, samples=5 00:11:41.619 iops : min= 24, max= 3878, avg=2455.20, stdev=1612.46, samples=5 00:11:41.619 lat (usec) : 250=60.11%, 
500=39.47%, 750=0.04% 00:11:41.619 lat (msec) : 2=0.01%, 4=0.01%, 20=0.01%, 50=0.32% 00:11:41.619 cpu : usr=1.51%, sys=4.52%, ctx=6864, majf=0, minf=2 00:11:41.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.619 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.619 issued rwts: total=6864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:41.619 00:11:41.619 Run status group 0 (all jobs): 00:11:41.619 READ: bw=23.0MiB/s (24.1MB/s), 103KiB/s-13.3MiB/s (106kB/s-13.9MB/s), io=77.4MiB (81.1MB), run=2725-3370msec 00:11:41.619 00:11:41.619 Disk stats (read/write): 00:11:41.619 nvme0n1: ios=1343/0, merge=0/0, ticks=3778/0, in_queue=3778, util=99.17% 00:11:41.619 nvme0n2: ios=11439/0, merge=0/0, ticks=3062/0, in_queue=3062, util=95.38% 00:11:41.619 nvme0n3: ios=74/0, merge=0/0, ticks=2840/0, in_queue=2840, util=96.52% 00:11:41.619 nvme0n4: ios=6508/0, merge=0/0, ticks=2430/0, in_queue=2430, util=96.45% 00:11:41.878 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:41.878 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:42.137 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:42.137 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:42.396 12:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:42.396 12:55:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:42.396 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:42.396 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:42.655 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:42.655 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1886048 00:11:42.655 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:42.655 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:42.914 nvmf hotplug test: fio failed as expected 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.914 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.174 rmmod nvme_tcp 00:11:43.174 rmmod nvme_fabrics 00:11:43.174 rmmod nvme_keyring 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1883299 ']' 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1883299 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1883299 ']' 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1883299 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1883299 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1883299' 00:11:43.174 killing process with pid 1883299 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1883299 00:11:43.174 12:55:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1883299 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:43.434 12:55:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.434 12:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.340 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.340 00:11:45.340 real 0m26.486s 00:11:45.340 user 1m47.276s 00:11:45.340 sys 0m7.969s 00:11:45.340 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.340 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.340 ************************************ 00:11:45.340 END TEST nvmf_fio_target 00:11:45.340 ************************************ 00:11:45.340 12:55:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:45.340 12:55:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.340 12:55:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.340 12:55:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:11:45.598 ************************************ 00:11:45.598 START TEST nvmf_bdevio 00:11:45.598 ************************************ 00:11:45.598 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:45.598 * Looking for test storage... 00:11:45.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.598 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:45.598 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:45.598 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:45.598 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:45.598 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.598 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.598 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.598 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.599 12:55:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:45.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.599 --rc genhtml_branch_coverage=1 00:11:45.599 --rc genhtml_function_coverage=1 00:11:45.599 --rc genhtml_legend=1 00:11:45.599 --rc geninfo_all_blocks=1 00:11:45.599 --rc geninfo_unexecuted_blocks=1 00:11:45.599 00:11:45.599 ' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:45.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.599 --rc genhtml_branch_coverage=1 00:11:45.599 --rc genhtml_function_coverage=1 00:11:45.599 --rc genhtml_legend=1 00:11:45.599 --rc geninfo_all_blocks=1 00:11:45.599 --rc geninfo_unexecuted_blocks=1 00:11:45.599 00:11:45.599 ' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:45.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.599 --rc genhtml_branch_coverage=1 00:11:45.599 --rc genhtml_function_coverage=1 00:11:45.599 --rc genhtml_legend=1 00:11:45.599 --rc geninfo_all_blocks=1 00:11:45.599 --rc geninfo_unexecuted_blocks=1 00:11:45.599 00:11:45.599 ' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:45.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.599 --rc genhtml_branch_coverage=1 00:11:45.599 --rc genhtml_function_coverage=1 00:11:45.599 --rc genhtml_legend=1 00:11:45.599 --rc geninfo_all_blocks=1 00:11:45.599 --rc geninfo_unexecuted_blocks=1 00:11:45.599 00:11:45.599 ' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.599 12:55:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.169 12:55:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.169 12:55:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:52.169 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:52.169 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.169 
12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:52.169 Found net devices under 0000:86:00.0: cvl_0_0 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.169 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:52.170 Found net devices under 0000:86:00.1: cvl_0_1 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:52.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:11:52.170 00:11:52.170 --- 10.0.0.2 ping statistics --- 00:11:52.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.170 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:11:52.170 00:11:52.170 --- 10.0.0.1 ping statistics --- 00:11:52.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.170 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.170 12:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.170 12:55:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1890621 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1890621 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1890621 ']' 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 [2024-11-29 12:55:51.067932] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:11:52.170 [2024-11-29 12:55:51.067992] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.170 [2024-11-29 12:55:51.135751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.170 [2024-11-29 12:55:51.179451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.170 [2024-11-29 12:55:51.179489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.170 [2024-11-29 12:55:51.179496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.170 [2024-11-29 12:55:51.179502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.170 [2024-11-29 12:55:51.179508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:52.170 [2024-11-29 12:55:51.180989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:52.170 [2024-11-29 12:55:51.181110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:52.170 [2024-11-29 12:55:51.181218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.170 [2024-11-29 12:55:51.181219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 [2024-11-29 12:55:51.319158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.170 12:55:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 Malloc0 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.170 [2024-11-29 12:55:51.389475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:52.170 { 00:11:52.170 "params": { 00:11:52.170 "name": "Nvme$subsystem", 00:11:52.170 "trtype": "$TEST_TRANSPORT", 00:11:52.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.170 "adrfam": "ipv4", 00:11:52.170 "trsvcid": "$NVMF_PORT", 00:11:52.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.170 "hdgst": ${hdgst:-false}, 00:11:52.170 "ddgst": ${ddgst:-false} 00:11:52.170 }, 00:11:52.170 "method": "bdev_nvme_attach_controller" 00:11:52.170 } 00:11:52.170 EOF 00:11:52.170 )") 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:52.170 12:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:52.170 "params": { 00:11:52.170 "name": "Nvme1", 00:11:52.170 "trtype": "tcp", 00:11:52.170 "traddr": "10.0.0.2", 00:11:52.170 "adrfam": "ipv4", 00:11:52.170 "trsvcid": "4420", 00:11:52.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.170 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.170 "hdgst": false, 00:11:52.170 "ddgst": false 00:11:52.170 }, 00:11:52.170 "method": "bdev_nvme_attach_controller" 00:11:52.170 }' 00:11:52.170 [2024-11-29 12:55:51.441156] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:11:52.170 [2024-11-29 12:55:51.441199] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890660 ] 00:11:52.170 [2024-11-29 12:55:51.506769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:52.170 [2024-11-29 12:55:51.551485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.170 [2024-11-29 12:55:51.551582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.170 [2024-11-29 12:55:51.551584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.170 I/O targets: 00:11:52.170 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:52.170 00:11:52.170 00:11:52.170 CUnit - A unit testing framework for C - Version 2.1-3 00:11:52.170 http://cunit.sourceforge.net/ 00:11:52.170 00:11:52.170 00:11:52.170 Suite: bdevio tests on: Nvme1n1 00:11:52.170 Test: blockdev write read block ...passed 00:11:52.170 Test: blockdev write zeroes read block ...passed 00:11:52.170 Test: blockdev write zeroes read no split ...passed 00:11:52.429 Test: blockdev write zeroes read split 
...passed 00:11:52.429 Test: blockdev write zeroes read split partial ...passed 00:11:52.429 Test: blockdev reset ...[2024-11-29 12:55:52.020139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:52.429 [2024-11-29 12:55:52.020215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a6350 (9): Bad file descriptor 00:11:52.429 [2024-11-29 12:55:52.077607] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:52.429 passed 00:11:52.429 Test: blockdev write read 8 blocks ...passed 00:11:52.429 Test: blockdev write read size > 128k ...passed 00:11:52.429 Test: blockdev write read invalid size ...passed 00:11:52.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.429 Test: blockdev write read max offset ...passed 00:11:52.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.688 Test: blockdev writev readv 8 blocks ...passed 00:11:52.688 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.688 Test: blockdev writev readv block ...passed 00:11:52.688 Test: blockdev writev readv size > 128k ...passed 00:11:52.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.688 Test: blockdev comparev and writev ...[2024-11-29 12:55:52.369688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:52.688 [2024-11-29 12:55:52.369719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.369734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:52.688 [2024-11-29 
12:55:52.369742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.369997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:52.688 [2024-11-29 12:55:52.370013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.370025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:52.688 [2024-11-29 12:55:52.370034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.370278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:52.688 [2024-11-29 12:55:52.370289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.370301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:52.688 [2024-11-29 12:55:52.370308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.370548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:52.688 [2024-11-29 12:55:52.370559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.370572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:52.688 [2024-11-29 12:55:52.370579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:52.688 passed 00:11:52.688 Test: blockdev nvme passthru rw ...passed 00:11:52.688 Test: blockdev nvme passthru vendor specific ...[2024-11-29 12:55:52.452393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:52.688 [2024-11-29 12:55:52.452411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.452530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:52.688 [2024-11-29 12:55:52.452540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.452648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:52.688 [2024-11-29 12:55:52.452657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:52.688 [2024-11-29 12:55:52.452766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:52.688 [2024-11-29 12:55:52.452775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:52.688 passed 00:11:52.688 Test: blockdev nvme admin passthru ...passed 00:11:52.948 Test: blockdev copy ...passed 00:11:52.948 00:11:52.948 Run Summary: Type Total Ran Passed Failed Inactive 00:11:52.948 suites 1 1 n/a 0 0 00:11:52.948 tests 23 23 23 0 0 00:11:52.948 asserts 152 152 152 0 n/a 00:11:52.948 00:11:52.948 Elapsed time = 1.291 seconds 
00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.948 rmmod nvme_tcp 00:11:52.948 rmmod nvme_fabrics 00:11:52.948 rmmod nvme_keyring 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1890621 ']' 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1890621 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1890621 ']' 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1890621 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.948 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1890621 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1890621' 00:11:53.207 killing process with pid 1890621 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1890621 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1890621 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.207 12:55:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.740 00:11:55.740 real 0m9.874s 00:11:55.740 user 0m11.317s 00:11:55.740 sys 0m4.753s 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.740 ************************************ 00:11:55.740 END TEST nvmf_bdevio 00:11:55.740 ************************************ 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:55.740 00:11:55.740 real 4m28.123s 00:11:55.740 user 10m23.912s 00:11:55.740 sys 1m34.492s 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:55.740 ************************************ 00:11:55.740 END TEST nvmf_target_core 00:11:55.740 ************************************ 00:11:55.740 12:55:55 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:55.740 12:55:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.740 12:55:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.740 12:55:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:55.740 ************************************ 00:11:55.740 START TEST nvmf_target_extra 00:11:55.740 ************************************ 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:55.740 * Looking for test storage... 00:11:55.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
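The `lt 1.15 2` / `cmp_versions` trace running through this part of the log splits each version string on `.-:` and compares the resulting fields pairwise. A simplified, self-contained sketch of that comparison (an approximation of what `scripts/common.sh` does, not the exact code):

```shell
#!/usr/bin/env bash
# Simplified field-by-field dotted-version "less than" check, mirroring
# the cmp_versions walkthrough in the trace (split on separators, pad
# missing fields with 0, first differing field decides).
ver_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local n=${#v1[@]} i
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}
```

With this sketch, `ver_lt 1.15 2` succeeds for the same reason the traced call does: the first field already decides (1 < 2), so the lcov version gate takes the "older than 2" branch.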
00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:55.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.740 --rc genhtml_branch_coverage=1 00:11:55.740 --rc genhtml_function_coverage=1 00:11:55.740 --rc genhtml_legend=1 00:11:55.740 --rc geninfo_all_blocks=1 
00:11:55.740 --rc geninfo_unexecuted_blocks=1 00:11:55.740 00:11:55.740 ' 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:55.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.740 --rc genhtml_branch_coverage=1 00:11:55.740 --rc genhtml_function_coverage=1 00:11:55.740 --rc genhtml_legend=1 00:11:55.740 --rc geninfo_all_blocks=1 00:11:55.740 --rc geninfo_unexecuted_blocks=1 00:11:55.740 00:11:55.740 ' 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:55.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.740 --rc genhtml_branch_coverage=1 00:11:55.740 --rc genhtml_function_coverage=1 00:11:55.740 --rc genhtml_legend=1 00:11:55.740 --rc geninfo_all_blocks=1 00:11:55.740 --rc geninfo_unexecuted_blocks=1 00:11:55.740 00:11:55.740 ' 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:55.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.740 --rc genhtml_branch_coverage=1 00:11:55.740 --rc genhtml_function_coverage=1 00:11:55.740 --rc genhtml_legend=1 00:11:55.740 --rc geninfo_all_blocks=1 00:11:55.740 --rc geninfo_unexecuted_blocks=1 00:11:55.740 00:11:55.740 ' 00:11:55.740 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.741 ************************************ 00:11:55.741 START TEST nvmf_example 00:11:55.741 ************************************ 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:55.741 * Looking for test storage... 00:11:55.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.741 
12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
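The `[: : integer expression expected` messages interleaved in this section (reported from `nvmf/common.sh` line 33) come from an arithmetic test like `'[' '' -eq 1 ']'` evaluating while the variable is empty. A sketch of the usual guard, defaulting the expansion before the numeric comparison (the variable name is illustrative):

```shell
#!/usr/bin/env bash
# Reproduce the failure mode safely: an empty value fed to -eq errors out,
# so default the expansion to 0 before comparing.
flag=""                          # simulate the unset test flag
if [ "${flag:-0}" -eq 1 ]; then  # "${flag:-0}" expands to "0" here
    echo "enabled"
else
    echo "disabled"              # prints "disabled"
fi
```

Without the `:-0` default, the same `if` would emit exactly the `integer expression expected` diagnostic seen in the log and fall through to the else branch anyway.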
00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:55.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.741 --rc genhtml_branch_coverage=1 00:11:55.741 --rc genhtml_function_coverage=1 00:11:55.741 --rc genhtml_legend=1 00:11:55.741 --rc geninfo_all_blocks=1 00:11:55.741 --rc geninfo_unexecuted_blocks=1 00:11:55.741 00:11:55.741 ' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:55.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.741 --rc genhtml_branch_coverage=1 00:11:55.741 --rc genhtml_function_coverage=1 00:11:55.741 --rc genhtml_legend=1 00:11:55.741 --rc geninfo_all_blocks=1 00:11:55.741 --rc geninfo_unexecuted_blocks=1 00:11:55.741 00:11:55.741 ' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:55.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.741 --rc genhtml_branch_coverage=1 00:11:55.741 --rc genhtml_function_coverage=1 00:11:55.741 --rc genhtml_legend=1 00:11:55.741 --rc geninfo_all_blocks=1 00:11:55.741 --rc geninfo_unexecuted_blocks=1 00:11:55.741 00:11:55.741 ' 00:11:55.741 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:55.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.741 --rc 
genhtml_branch_coverage=1 00:11:55.741 --rc genhtml_function_coverage=1 00:11:55.741 --rc genhtml_legend=1 00:11:55.741 --rc geninfo_all_blocks=1 00:11:55.741 --rc geninfo_unexecuted_blocks=1 00:11:55.741 00:11:55.741 ' 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:55.742 12:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.742 
12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.742 12:55:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:01.011 12:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:01.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.011 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:01.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:01.012 Found net devices under 0000:86:00.0: cvl_0_0 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.012 12:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:01.012 Found net devices under 0000:86:00.1: cvl_0_1 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.012 
12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:12:01.012 00:12:01.012 --- 10.0.0.2 ping statistics --- 00:12:01.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.012 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:01.012 00:12:01.012 --- 10.0.0.1 ping statistics --- 00:12:01.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.012 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.012 12:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1894464 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1894464 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1894464 ']' 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.012 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.950 12:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.950 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.951 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:01.951 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:14.341 Initializing NVMe Controllers 00:12:14.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:14.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:14.341 Initialization complete. Launching workers. 00:12:14.341 ======================================================== 00:12:14.341 Latency(us) 00:12:14.341 Device Information : IOPS MiB/s Average min max 00:12:14.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17638.63 68.90 3627.77 712.00 15610.01 00:12:14.341 ======================================================== 00:12:14.341 Total : 17638.63 68.90 3627.77 712.00 15610.01 00:12:14.341 00:12:14.341 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:14.341 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:14.341 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.341 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.341 rmmod nvme_tcp 00:12:14.341 rmmod nvme_fabrics 00:12:14.341 rmmod nvme_keyring 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1894464 ']' 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1894464 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1894464 ']' 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1894464 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1894464 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1894464' 00:12:14.341 killing process with pid 1894464 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1894464 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1894464 00:12:14.341 nvmf threads initialize successfully 00:12:14.341 bdev subsystem init successfully 00:12:14.341 created a nvmf target service 00:12:14.341 create targets's poll groups done 00:12:14.341 all subsystems of target started 00:12:14.341 nvmf target is running 00:12:14.341 all subsystems of target stopped 00:12:14.341 
destroy targets's poll groups done 00:12:14.341 destroyed the nvmf target service 00:12:14.341 bdev subsystem finish successfully 00:12:14.341 nvmf threads destroy successfully 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.341 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.600 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.600 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:14.600 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.600 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 
00:12:14.859 00:12:14.859 real 0m19.084s 00:12:14.859 user 0m46.075s 00:12:14.859 sys 0m5.541s 00:12:14.859 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.859 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:14.859 ************************************ 00:12:14.859 END TEST nvmf_example 00:12:14.859 ************************************ 00:12:14.859 12:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:14.859 12:56:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.859 12:56:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.859 12:56:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.859 ************************************ 00:12:14.859 START TEST nvmf_filesystem 00:12:14.859 ************************************ 00:12:14.859 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:14.859 * Looking for test storage... 
00:12:14.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:14.860 
12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:14.860 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:14.860 --rc genhtml_branch_coverage=1 00:12:14.860 --rc genhtml_function_coverage=1 00:12:14.860 --rc genhtml_legend=1 00:12:14.860 --rc geninfo_all_blocks=1 00:12:14.860 --rc geninfo_unexecuted_blocks=1 00:12:14.860 00:12:14.860 ' 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:14.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.860 --rc genhtml_branch_coverage=1 00:12:14.860 --rc genhtml_function_coverage=1 00:12:14.860 --rc genhtml_legend=1 00:12:14.860 --rc geninfo_all_blocks=1 00:12:14.860 --rc geninfo_unexecuted_blocks=1 00:12:14.860 00:12:14.860 ' 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:14.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.860 --rc genhtml_branch_coverage=1 00:12:14.860 --rc genhtml_function_coverage=1 00:12:14.860 --rc genhtml_legend=1 00:12:14.860 --rc geninfo_all_blocks=1 00:12:14.860 --rc geninfo_unexecuted_blocks=1 00:12:14.860 00:12:14.860 ' 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:14.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.860 --rc genhtml_branch_coverage=1 00:12:14.860 --rc genhtml_function_coverage=1 00:12:14.860 --rc genhtml_legend=1 00:12:14.860 --rc geninfo_all_blocks=1 00:12:14.860 --rc geninfo_unexecuted_blocks=1 00:12:14.860 00:12:14.860 ' 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:14.860 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:14.860 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:14.860 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:14.860 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:14.861 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:14.861 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:14.861 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:15.124 
12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:15.124 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:15.124 #define SPDK_CONFIG_H 00:12:15.124 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:15.124 #define SPDK_CONFIG_APPS 1 00:12:15.124 #define SPDK_CONFIG_ARCH native 00:12:15.124 #undef SPDK_CONFIG_ASAN 00:12:15.124 #undef SPDK_CONFIG_AVAHI 00:12:15.124 #undef SPDK_CONFIG_CET 00:12:15.124 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:15.124 #define SPDK_CONFIG_COVERAGE 1 00:12:15.124 #define SPDK_CONFIG_CROSS_PREFIX 00:12:15.124 #undef SPDK_CONFIG_CRYPTO 00:12:15.124 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:15.124 #undef SPDK_CONFIG_CUSTOMOCF 00:12:15.124 #undef SPDK_CONFIG_DAOS 00:12:15.124 #define SPDK_CONFIG_DAOS_DIR 00:12:15.124 #define SPDK_CONFIG_DEBUG 1 00:12:15.124 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:15.124 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:15.124 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:15.124 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:15.124 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:15.124 #undef SPDK_CONFIG_DPDK_UADK 00:12:15.124 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:15.124 #define SPDK_CONFIG_EXAMPLES 1 00:12:15.124 #undef SPDK_CONFIG_FC 00:12:15.124 #define SPDK_CONFIG_FC_PATH 00:12:15.124 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:15.124 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:15.124 #define SPDK_CONFIG_FSDEV 1 00:12:15.124 #undef SPDK_CONFIG_FUSE 00:12:15.124 #undef SPDK_CONFIG_FUZZER 00:12:15.124 #define SPDK_CONFIG_FUZZER_LIB 00:12:15.124 #undef SPDK_CONFIG_GOLANG 00:12:15.124 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:15.124 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:15.125 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:15.125 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:15.125 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:15.125 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:15.125 #undef SPDK_CONFIG_HAVE_LZ4 00:12:15.125 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:15.125 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:15.125 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:15.125 #define SPDK_CONFIG_IDXD 1 00:12:15.125 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:15.125 #undef SPDK_CONFIG_IPSEC_MB 00:12:15.125 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:15.125 #define SPDK_CONFIG_ISAL 1 00:12:15.125 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:15.125 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:15.125 #define SPDK_CONFIG_LIBDIR 00:12:15.125 #undef SPDK_CONFIG_LTO 00:12:15.125 #define SPDK_CONFIG_MAX_LCORES 128 00:12:15.125 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:15.125 #define SPDK_CONFIG_NVME_CUSE 1 00:12:15.125 #undef SPDK_CONFIG_OCF 00:12:15.125 #define SPDK_CONFIG_OCF_PATH 00:12:15.125 #define SPDK_CONFIG_OPENSSL_PATH 00:12:15.125 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:15.125 #define SPDK_CONFIG_PGO_DIR 00:12:15.125 #undef SPDK_CONFIG_PGO_USE 00:12:15.125 #define SPDK_CONFIG_PREFIX /usr/local 00:12:15.125 #undef SPDK_CONFIG_RAID5F 00:12:15.125 #undef SPDK_CONFIG_RBD 00:12:15.125 #define SPDK_CONFIG_RDMA 1 00:12:15.125 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:15.125 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:15.125 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:15.125 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:15.125 #define SPDK_CONFIG_SHARED 1 00:12:15.125 #undef SPDK_CONFIG_SMA 00:12:15.125 #define SPDK_CONFIG_TESTS 1 00:12:15.125 #undef SPDK_CONFIG_TSAN 00:12:15.125 #define SPDK_CONFIG_UBLK 1 00:12:15.125 #define SPDK_CONFIG_UBSAN 1 00:12:15.125 #undef SPDK_CONFIG_UNIT_TESTS 00:12:15.125 #undef SPDK_CONFIG_URING 00:12:15.125 #define SPDK_CONFIG_URING_PATH 00:12:15.125 #undef SPDK_CONFIG_URING_ZNS 00:12:15.125 #undef SPDK_CONFIG_USDT 00:12:15.125 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:15.125 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:15.125 #define SPDK_CONFIG_VFIO_USER 1 00:12:15.125 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:15.125 #define SPDK_CONFIG_VHOST 1 00:12:15.125 #define SPDK_CONFIG_VIRTIO 1 00:12:15.125 #undef SPDK_CONFIG_VTUNE 00:12:15.125 #define SPDK_CONFIG_VTUNE_DIR 00:12:15.125 #define SPDK_CONFIG_WERROR 1 00:12:15.125 #define SPDK_CONFIG_WPDK_DIR 00:12:15.125 #undef SPDK_CONFIG_XNVME 00:12:15.125 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:15.125 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:15.125 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:15.126 
12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:15.126 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:15.126 
12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:15.126 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:15.126 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1896876 ]] 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1896876 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.rK1gVq 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.rK1gVq/tests/target /tmp/spdk.rK1gVq 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:15.127 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189126680576 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6837280768 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97980375040 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:12:15.128 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1605632 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:15.128 * Looking for test storage... 
00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189126680576 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9051873280 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.128 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:15.128 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:15.128 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.129 --rc genhtml_branch_coverage=1 00:12:15.129 --rc genhtml_function_coverage=1 00:12:15.129 --rc genhtml_legend=1 00:12:15.129 --rc geninfo_all_blocks=1 00:12:15.129 --rc geninfo_unexecuted_blocks=1 00:12:15.129 00:12:15.129 ' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.129 --rc genhtml_branch_coverage=1 00:12:15.129 --rc genhtml_function_coverage=1 00:12:15.129 --rc genhtml_legend=1 00:12:15.129 --rc geninfo_all_blocks=1 00:12:15.129 --rc geninfo_unexecuted_blocks=1 00:12:15.129 00:12:15.129 ' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.129 --rc genhtml_branch_coverage=1 00:12:15.129 --rc genhtml_function_coverage=1 00:12:15.129 --rc genhtml_legend=1 00:12:15.129 --rc geninfo_all_blocks=1 00:12:15.129 --rc geninfo_unexecuted_blocks=1 00:12:15.129 00:12:15.129 ' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.129 --rc genhtml_branch_coverage=1 00:12:15.129 --rc genhtml_function_coverage=1 00:12:15.129 --rc genhtml_legend=1 00:12:15.129 --rc geninfo_all_blocks=1 00:12:15.129 --rc geninfo_unexecuted_blocks=1 00:12:15.129 00:12:15.129 ' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.129 12:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.129 12:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.399 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.399 12:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:20.400 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:20.400 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.400 12:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:20.400 Found net devices under 0000:86:00.0: cvl_0_0 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:20.400 Found net devices under 0000:86:00.1: cvl_0_1 00:12:20.400 12:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:20.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:12:20.400 00:12:20.400 --- 10.0.0.2 ping statistics --- 00:12:20.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.400 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:12:20.400 12:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:12:20.400 00:12:20.400 --- 10.0.0.1 ping statistics --- 00:12:20.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.400 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:20.400 12:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.400 ************************************ 00:12:20.400 START TEST nvmf_filesystem_no_in_capsule 00:12:20.400 ************************************ 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.400 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.401 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1899907 00:12:20.401 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1899907 00:12:20.401 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1899907 ']' 00:12:20.401 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.401 12:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.401 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.401 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.401 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.401 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.401 [2024-11-29 12:56:20.137388] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:12:20.401 [2024-11-29 12:56:20.137434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.401 [2024-11-29 12:56:20.204710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.660 [2024-11-29 12:56:20.248308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.660 [2024-11-29 12:56:20.248342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:20.660 [2024-11-29 12:56:20.248349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.660 [2024-11-29 12:56:20.248355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.660 [2024-11-29 12:56:20.248361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.660 [2024-11-29 12:56:20.249921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.660 [2024-11-29 12:56:20.250042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.660 [2024-11-29 12:56:20.250064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.660 [2024-11-29 12:56:20.250066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.660 [2024-11-29 12:56:20.384166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.660 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.919 Malloc1 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.919 [2024-11-29 12:56:20.541730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:20.919 12:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:20.919 { 00:12:20.919 "name": "Malloc1", 00:12:20.919 "aliases": [ 00:12:20.919 "dd9aa551-d183-47c3-b1af-75200f0773a9" 00:12:20.919 ], 00:12:20.919 "product_name": "Malloc disk", 00:12:20.919 "block_size": 512, 00:12:20.919 "num_blocks": 1048576, 00:12:20.919 "uuid": "dd9aa551-d183-47c3-b1af-75200f0773a9", 00:12:20.919 "assigned_rate_limits": { 00:12:20.919 "rw_ios_per_sec": 0, 00:12:20.919 "rw_mbytes_per_sec": 0, 00:12:20.919 "r_mbytes_per_sec": 0, 00:12:20.919 "w_mbytes_per_sec": 0 00:12:20.919 }, 00:12:20.919 "claimed": true, 00:12:20.919 "claim_type": "exclusive_write", 00:12:20.919 "zoned": false, 00:12:20.919 "supported_io_types": { 00:12:20.919 "read": true, 00:12:20.919 "write": true, 00:12:20.919 "unmap": true, 00:12:20.919 "flush": true, 00:12:20.919 "reset": true, 00:12:20.919 "nvme_admin": false, 00:12:20.919 "nvme_io": false, 00:12:20.919 "nvme_io_md": false, 00:12:20.919 "write_zeroes": true, 00:12:20.919 "zcopy": true, 00:12:20.919 "get_zone_info": false, 00:12:20.919 "zone_management": false, 00:12:20.919 "zone_append": false, 00:12:20.919 "compare": false, 00:12:20.919 "compare_and_write": 
false, 00:12:20.919 "abort": true, 00:12:20.919 "seek_hole": false, 00:12:20.919 "seek_data": false, 00:12:20.919 "copy": true, 00:12:20.919 "nvme_iov_md": false 00:12:20.919 }, 00:12:20.919 "memory_domains": [ 00:12:20.919 { 00:12:20.919 "dma_device_id": "system", 00:12:20.919 "dma_device_type": 1 00:12:20.919 }, 00:12:20.919 { 00:12:20.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.919 "dma_device_type": 2 00:12:20.919 } 00:12:20.919 ], 00:12:20.919 "driver_specific": {} 00:12:20.919 } 00:12:20.919 ]' 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:20.919 12:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.316 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:22.316 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:22.316 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.316 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:22.316 12:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:24.217 12:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:24.217 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:24.218 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:24.218 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:24.218 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:24.218 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:24.476 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:25.411 12:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.411 ************************************ 00:12:25.411 START TEST filesystem_ext4 00:12:25.411 ************************************ 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:25.411 12:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:25.411 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:25.411 mke2fs 1.47.0 (5-Feb-2023) 00:12:25.411 Discarding device blocks: 0/522240 done 00:12:25.411 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:25.411 Filesystem UUID: 2dfa2805-48b3-4ddd-af23-e652a408ece2 00:12:25.411 Superblock backups stored on blocks: 00:12:25.411 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:25.411 00:12:25.411 Allocating group tables: 0/64 done 00:12:25.411 Writing inode tables: 0/64 done 00:12:25.669 Creating journal (8192 blocks): done 00:12:27.561 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:12:27.561 00:12:27.561 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:27.561 12:56:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:34.120 12:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1899907 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:34.120 00:12:34.120 real 0m8.240s 00:12:34.120 user 0m0.017s 00:12:34.120 sys 0m0.084s 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:34.120 ************************************ 00:12:34.120 END TEST filesystem_ext4 00:12:34.120 ************************************ 00:12:34.120 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:34.120 
12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.121 ************************************ 00:12:34.121 START TEST filesystem_btrfs 00:12:34.121 ************************************ 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:34.121 12:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:34.121 btrfs-progs v6.8.1 00:12:34.121 See https://btrfs.readthedocs.io for more information. 00:12:34.121 00:12:34.121 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:34.121 NOTE: several default settings have changed in version 5.15, please make sure 00:12:34.121 this does not affect your deployments: 00:12:34.121 - DUP for metadata (-m dup) 00:12:34.121 - enabled no-holes (-O no-holes) 00:12:34.121 - enabled free-space-tree (-R free-space-tree) 00:12:34.121 00:12:34.121 Label: (null) 00:12:34.121 UUID: f25dfbcb-a0f3-4e27-8cf3-5ac474bbf72d 00:12:34.121 Node size: 16384 00:12:34.121 Sector size: 4096 (CPU page size: 4096) 00:12:34.121 Filesystem size: 510.00MiB 00:12:34.121 Block group profiles: 00:12:34.121 Data: single 8.00MiB 00:12:34.121 Metadata: DUP 32.00MiB 00:12:34.121 System: DUP 8.00MiB 00:12:34.121 SSD detected: yes 00:12:34.121 Zoned device: no 00:12:34.121 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:34.121 Checksum: crc32c 00:12:34.121 Number of devices: 1 00:12:34.121 Devices: 00:12:34.121 ID SIZE PATH 00:12:34.121 1 510.00MiB /dev/nvme0n1p1 00:12:34.121 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:34.121 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:34.380 12:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:34.380 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:34.380 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:34.380 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:34.380 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:34.380 12:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1899907 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:34.380 00:12:34.380 real 0m0.582s 00:12:34.380 user 0m0.030s 00:12:34.380 sys 0m0.108s 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.380 
12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:34.380 ************************************ 00:12:34.380 END TEST filesystem_btrfs 00:12:34.380 ************************************ 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.380 ************************************ 00:12:34.380 START TEST filesystem_xfs 00:12:34.380 ************************************ 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:34.380 12:56:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:34.380 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:34.380 = sectsz=512 attr=2, projid32bit=1 00:12:34.380 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:34.380 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:34.380 data = bsize=4096 blocks=130560, imaxpct=25 00:12:34.380 = sunit=0 swidth=0 blks 00:12:34.380 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:34.380 log =internal log bsize=4096 blocks=16384, version=2 00:12:34.380 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:34.380 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:35.755 Discarding blocks...Done. 
00:12:35.755 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:35.755 12:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.131 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.131 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:37.131 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.131 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:37.131 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:37.131 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.389 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1899907 00:12:37.389 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.389 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.389 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.389 12:56:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.389 00:12:37.389 real 0m2.885s 00:12:37.389 user 0m0.024s 00:12:37.389 sys 0m0.075s 00:12:37.389 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.389 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:37.390 ************************************ 00:12:37.390 END TEST filesystem_xfs 00:12:37.390 ************************************ 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.390 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1899907 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1899907 ']' 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1899907 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1899907 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1899907' 00:12:37.649 killing process with pid 1899907 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1899907 00:12:37.649 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1899907 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:37.908 00:12:37.908 real 0m17.505s 00:12:37.908 user 1m8.902s 00:12:37.908 sys 0m1.406s 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.908 ************************************ 00:12:37.908 END TEST nvmf_filesystem_no_in_capsule 00:12:37.908 ************************************ 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.908 12:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.908 ************************************ 00:12:37.908 START TEST nvmf_filesystem_in_capsule 00:12:37.908 ************************************ 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1903117 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1903117 00:12:37.908 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.909 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1903117 ']' 00:12:37.909 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.909 12:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.909 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.909 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.909 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.909 [2024-11-29 12:56:37.703296] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:12:37.909 [2024-11-29 12:56:37.703339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.167 [2024-11-29 12:56:37.763461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.167 [2024-11-29 12:56:37.804225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.167 [2024-11-29 12:56:37.804263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.167 [2024-11-29 12:56:37.804270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.167 [2024-11-29 12:56:37.804276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.167 [2024-11-29 12:56:37.804281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:38.167 [2024-11-29 12:56:37.805831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.167 [2024-11-29 12:56:37.805929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.167 [2024-11-29 12:56:37.806019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.167 [2024-11-29 12:56:37.806021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.167 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.167 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.168 [2024-11-29 12:56:37.952318] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.168 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.426 Malloc1 00:12:38.426 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.426 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:38.426 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.426 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.427 12:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.427 [2024-11-29 12:56:38.117116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.427 12:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:38.427 { 00:12:38.427 "name": "Malloc1", 00:12:38.427 "aliases": [ 00:12:38.427 "faf3d1b7-2edf-44ab-9cea-f6af098cbe2f" 00:12:38.427 ], 00:12:38.427 "product_name": "Malloc disk", 00:12:38.427 "block_size": 512, 00:12:38.427 "num_blocks": 1048576, 00:12:38.427 "uuid": "faf3d1b7-2edf-44ab-9cea-f6af098cbe2f", 00:12:38.427 "assigned_rate_limits": { 00:12:38.427 "rw_ios_per_sec": 0, 00:12:38.427 "rw_mbytes_per_sec": 0, 00:12:38.427 "r_mbytes_per_sec": 0, 00:12:38.427 "w_mbytes_per_sec": 0 00:12:38.427 }, 00:12:38.427 "claimed": true, 00:12:38.427 "claim_type": "exclusive_write", 00:12:38.427 "zoned": false, 00:12:38.427 "supported_io_types": { 00:12:38.427 "read": true, 00:12:38.427 "write": true, 00:12:38.427 "unmap": true, 00:12:38.427 "flush": true, 00:12:38.427 "reset": true, 00:12:38.427 "nvme_admin": false, 00:12:38.427 "nvme_io": false, 00:12:38.427 "nvme_io_md": false, 00:12:38.427 "write_zeroes": true, 00:12:38.427 "zcopy": true, 00:12:38.427 "get_zone_info": false, 00:12:38.427 "zone_management": false, 00:12:38.427 "zone_append": false, 00:12:38.427 "compare": false, 00:12:38.427 "compare_and_write": false, 00:12:38.427 "abort": true, 00:12:38.427 "seek_hole": false, 00:12:38.427 "seek_data": false, 00:12:38.427 "copy": true, 00:12:38.427 "nvme_iov_md": false 00:12:38.427 }, 00:12:38.427 "memory_domains": [ 00:12:38.427 { 00:12:38.427 "dma_device_id": "system", 00:12:38.427 "dma_device_type": 1 00:12:38.427 }, 00:12:38.427 { 00:12:38.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.427 "dma_device_type": 2 00:12:38.427 } 00:12:38.427 ], 00:12:38.427 
"driver_specific": {} 00:12:38.427 } 00:12:38.427 ]' 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:38.427 12:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.804 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.804 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:39.804 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.804 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:39.804 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:41.705 12:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:41.705 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:42.272 12:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:43.209 ************************************ 00:12:43.209 START TEST filesystem_in_capsule_ext4 00:12:43.209 ************************************ 00:12:43.209 12:56:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:43.209 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:43.210 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:43.210 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:43.210 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:43.210 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:43.210 12:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:43.210 mke2fs 1.47.0 (5-Feb-2023) 00:12:43.210 Discarding device blocks: 
0/522240 done 00:12:43.210 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:43.210 Filesystem UUID: 9cfc2603-5eca-4354-94d1-7887e234e98d 00:12:43.210 Superblock backups stored on blocks: 00:12:43.210 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:43.210 00:12:43.210 Allocating group tables: 0/64 done 00:12:43.210 Writing inode tables: 0/64 done 00:12:43.469 Creating journal (8192 blocks): done 00:12:45.669 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:12:45.669 00:12:45.669 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:45.669 12:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1903117 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:52.232 00:12:52.232 real 0m8.438s 00:12:52.232 user 0m0.033s 00:12:52.232 sys 0m0.066s 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:52.232 ************************************ 00:12:52.232 END TEST filesystem_in_capsule_ext4 00:12:52.232 ************************************ 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:52.232 ************************************ 00:12:52.232 START 
TEST filesystem_in_capsule_btrfs 00:12:52.232 ************************************ 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:52.232 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:52.232 btrfs-progs v6.8.1 00:12:52.232 See https://btrfs.readthedocs.io for more information. 00:12:52.232 00:12:52.232 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:52.233 NOTE: several default settings have changed in version 5.15, please make sure 00:12:52.233 this does not affect your deployments: 00:12:52.233 - DUP for metadata (-m dup) 00:12:52.233 - enabled no-holes (-O no-holes) 00:12:52.233 - enabled free-space-tree (-R free-space-tree) 00:12:52.233 00:12:52.233 Label: (null) 00:12:52.233 UUID: 6118b676-d180-430d-af51-4d2d51c78246 00:12:52.233 Node size: 16384 00:12:52.233 Sector size: 4096 (CPU page size: 4096) 00:12:52.233 Filesystem size: 510.00MiB 00:12:52.233 Block group profiles: 00:12:52.233 Data: single 8.00MiB 00:12:52.233 Metadata: DUP 32.00MiB 00:12:52.233 System: DUP 8.00MiB 00:12:52.233 SSD detected: yes 00:12:52.233 Zoned device: no 00:12:52.233 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:52.233 Checksum: crc32c 00:12:52.233 Number of devices: 1 00:12:52.233 Devices: 00:12:52.233 ID SIZE PATH 00:12:52.233 1 510.00MiB /dev/nvme0n1p1 00:12:52.233 00:12:52.233 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:52.233 12:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:53.169 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:53.169 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:53.169 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:53.169 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:53.169 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:53.169 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:53.169 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1903117 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:53.170 00:12:53.170 real 0m1.444s 00:12:53.170 user 0m0.023s 00:12:53.170 sys 0m0.115s 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:53.170 ************************************ 00:12:53.170 END TEST filesystem_in_capsule_btrfs 00:12:53.170 ************************************ 00:12:53.170 12:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.170 ************************************ 00:12:53.170 START TEST filesystem_in_capsule_xfs 00:12:53.170 ************************************ 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:53.170 
12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:53.170 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:53.429 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:53.429 = sectsz=512 attr=2, projid32bit=1 00:12:53.429 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:53.429 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:53.429 data = bsize=4096 blocks=130560, imaxpct=25 00:12:53.429 = sunit=0 swidth=0 blks 00:12:53.429 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:53.429 log =internal log bsize=4096 blocks=16384, version=2 00:12:53.429 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:53.429 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:53.997 Discarding blocks...Done. 
00:12:53.997 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:53.997 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1903117 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:55.902 00:12:55.902 real 0m2.732s 00:12:55.902 user 0m0.033s 00:12:55.902 sys 0m0.066s 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:55.902 ************************************ 00:12:55.902 END TEST filesystem_in_capsule_xfs 00:12:55.902 ************************************ 00:12:55.902 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:56.162 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:56.162 12:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.421 12:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1903117 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1903117 ']' 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1903117 00:12:56.421 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:56.422 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.422 12:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1903117 00:12:56.422 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.422 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.422 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1903117' 00:12:56.422 killing process with pid 1903117 00:12:56.422 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1903117 00:12:56.422 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1903117 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:56.990 00:12:56.990 real 0m18.861s 00:12:56.990 user 1m14.339s 00:12:56.990 sys 0m1.441s 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:56.990 ************************************ 00:12:56.990 END TEST nvmf_filesystem_in_capsule 00:12:56.990 ************************************ 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.990 rmmod nvme_tcp 00:12:56.990 rmmod nvme_fabrics 00:12:56.990 rmmod nvme_keyring 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.990 12:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.894 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.894 00:12:58.894 real 0m44.204s 00:12:58.894 user 2m24.934s 00:12:58.894 sys 0m6.874s 00:12:58.894 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.894 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:58.894 ************************************ 00:12:58.894 END TEST nvmf_filesystem 00:12:58.894 ************************************ 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.154 ************************************ 00:12:59.154 START TEST nvmf_target_discovery 00:12:59.154 ************************************ 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:59.154 * Looking for test storage... 
00:12:59.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:59.154 
12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:59.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.154 --rc genhtml_branch_coverage=1 00:12:59.154 --rc genhtml_function_coverage=1 00:12:59.154 --rc genhtml_legend=1 00:12:59.154 --rc geninfo_all_blocks=1 00:12:59.154 --rc geninfo_unexecuted_blocks=1 00:12:59.154 00:12:59.154 ' 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:59.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.154 --rc genhtml_branch_coverage=1 00:12:59.154 --rc genhtml_function_coverage=1 00:12:59.154 --rc genhtml_legend=1 00:12:59.154 --rc geninfo_all_blocks=1 00:12:59.154 --rc geninfo_unexecuted_blocks=1 00:12:59.154 00:12:59.154 ' 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:59.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.154 --rc genhtml_branch_coverage=1 00:12:59.154 --rc genhtml_function_coverage=1 00:12:59.154 --rc genhtml_legend=1 00:12:59.154 --rc geninfo_all_blocks=1 00:12:59.154 --rc geninfo_unexecuted_blocks=1 00:12:59.154 00:12:59.154 ' 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:59.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.154 --rc genhtml_branch_coverage=1 00:12:59.154 --rc genhtml_function_coverage=1 00:12:59.154 --rc genhtml_legend=1 00:12:59.154 --rc geninfo_all_blocks=1 00:12:59.154 --rc geninfo_unexecuted_blocks=1 00:12:59.154 00:12:59.154 ' 00:12:59.154 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.155 12:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.155 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.414 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.414 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.415 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.415 12:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.735 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:04.735 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:04.735 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.735 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:04.735 Found net devices under 0000:86:00.0: cvl_0_0 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.735 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:04.735 Found net devices under 0000:86:00.1: cvl_0_1 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:04.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:13:04.735 00:13:04.735 --- 10.0.0.2 ping statistics --- 00:13:04.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.735 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:04.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:13:04.735 00:13:04.735 --- 10.0.0.1 ping statistics --- 00:13:04.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.735 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1909980 00:13:04.735 12:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1909980 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1909980 ']' 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.735 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.735 [2024-11-29 12:57:04.495926] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:13:04.735 [2024-11-29 12:57:04.495991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.994 [2024-11-29 12:57:04.564517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.994 [2024-11-29 12:57:04.608125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
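The `nvmf_tcp_init` and `nvmfappstart` steps logged above (move one E810 port into a namespace as the target side, address both ends, open TCP/4420, then launch nvmf_tgt inside the namespace) can be condensed into a dry-run sketch. The `run()` echo wrapper is a hypothetical stand-in so the sequence prints instead of requiring root; names and flags are the ones shown in the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing and target launch above.
# run() echoes each command; replace its body with "$@" to execute for real.
run() { echo "$@"; }

NS=cvl_0_0_ns_spdk TGT=cvl_0_0 INI=cvl_0_1
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"                       # target port lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INI"                   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
run ip netns exec "$NS" nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # then waitforlisten on /var/tmp/spdk.sock
```

The two ping checks in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) verify this plumbing in both directions before the test proceeds.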
00:13:04.994 [2024-11-29 12:57:04.608163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.994 [2024-11-29 12:57:04.608170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.994 [2024-11-29 12:57:04.608176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.994 [2024-11-29 12:57:04.608182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.994 [2024-11-29 12:57:04.609775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.994 [2024-11-29 12:57:04.609794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.994 [2024-11-29 12:57:04.609888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.994 [2024-11-29 12:57:04.609889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.994 [2024-11-29 12:57:04.753109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.994 Null1 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.994 
12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.994 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:04.994 [2024-11-29 12:57:04.811106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.252 Null2 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.252 
12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.252 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 Null3 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 Null4 00:13:05.253 
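The repeated `bdev_null_create` / `nvmf_create_subsystem` / `nvmf_subsystem_add_ns` / `nvmf_subsystem_add_listener` sequence above is discovery.sh's setup loop: four null bdevs, each exported through its own subsystem on 10.0.0.2:4420, followed by the discovery listener and a 4430 referral. A dry-run sketch (the `rpc()` echo wrapper is a hypothetical stand-in for the test's `rpc_cmd`):

```shell
#!/usr/bin/env bash
# Dry-run sketch of discovery.sh's setup loop, mirroring the RPCs above.
rpc() { echo "rpc.py $*"; }   # stand-in for rpc_cmd; prints instead of calling SPDK

for i in 1 2 3 4; do
  rpc bdev_null_create "Null$i" 102400 512                      # 100 MiB, 512 B blocks
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

This accounts for the six discovery-log records reported next: the discovery subsystem itself, cnode1 through cnode4, and the 4430 referral.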
12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.253 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:05.510 00:13:05.510 Discovery Log Number of Records 6, Generation counter 6 00:13:05.510 =====Discovery Log Entry 0====== 00:13:05.510 trtype: tcp 00:13:05.510 adrfam: ipv4 00:13:05.510 subtype: current discovery subsystem 00:13:05.510 treq: not required 00:13:05.510 portid: 0 00:13:05.510 trsvcid: 4420 00:13:05.510 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:05.510 traddr: 10.0.0.2 00:13:05.510 eflags: explicit discovery connections, duplicate discovery information 00:13:05.510 sectype: none 00:13:05.510 =====Discovery Log Entry 1====== 00:13:05.510 trtype: tcp 00:13:05.510 adrfam: ipv4 00:13:05.510 subtype: nvme subsystem 00:13:05.510 treq: not required 00:13:05.510 portid: 0 00:13:05.510 trsvcid: 4420 00:13:05.510 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:05.510 traddr: 10.0.0.2 00:13:05.510 eflags: none 00:13:05.510 sectype: none 00:13:05.510 =====Discovery Log Entry 2====== 00:13:05.510 
trtype: tcp 00:13:05.510 adrfam: ipv4 00:13:05.510 subtype: nvme subsystem 00:13:05.510 treq: not required 00:13:05.510 portid: 0 00:13:05.510 trsvcid: 4420 00:13:05.510 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:05.510 traddr: 10.0.0.2 00:13:05.510 eflags: none 00:13:05.510 sectype: none 00:13:05.510 =====Discovery Log Entry 3====== 00:13:05.510 trtype: tcp 00:13:05.510 adrfam: ipv4 00:13:05.510 subtype: nvme subsystem 00:13:05.510 treq: not required 00:13:05.510 portid: 0 00:13:05.510 trsvcid: 4420 00:13:05.511 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:05.511 traddr: 10.0.0.2 00:13:05.511 eflags: none 00:13:05.511 sectype: none 00:13:05.511 =====Discovery Log Entry 4====== 00:13:05.511 trtype: tcp 00:13:05.511 adrfam: ipv4 00:13:05.511 subtype: nvme subsystem 00:13:05.511 treq: not required 00:13:05.511 portid: 0 00:13:05.511 trsvcid: 4420 00:13:05.511 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:05.511 traddr: 10.0.0.2 00:13:05.511 eflags: none 00:13:05.511 sectype: none 00:13:05.511 =====Discovery Log Entry 5====== 00:13:05.511 trtype: tcp 00:13:05.511 adrfam: ipv4 00:13:05.511 subtype: discovery subsystem referral 00:13:05.511 treq: not required 00:13:05.511 portid: 0 00:13:05.511 trsvcid: 4430 00:13:05.511 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:05.511 traddr: 10.0.0.2 00:13:05.511 eflags: none 00:13:05.511 sectype: none 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:05.511 Perform nvmf subsystem discovery via RPC 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 [ 00:13:05.511 { 00:13:05.511 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:05.511 "subtype": "Discovery", 00:13:05.511 "listen_addresses": [ 00:13:05.511 { 00:13:05.511 "trtype": "TCP", 00:13:05.511 "adrfam": "IPv4", 00:13:05.511 "traddr": "10.0.0.2", 00:13:05.511 "trsvcid": "4420" 00:13:05.511 } 00:13:05.511 ], 00:13:05.511 "allow_any_host": true, 00:13:05.511 "hosts": [] 00:13:05.511 }, 00:13:05.511 { 00:13:05.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:05.511 "subtype": "NVMe", 00:13:05.511 "listen_addresses": [ 00:13:05.511 { 00:13:05.511 "trtype": "TCP", 00:13:05.511 "adrfam": "IPv4", 00:13:05.511 "traddr": "10.0.0.2", 00:13:05.511 "trsvcid": "4420" 00:13:05.511 } 00:13:05.511 ], 00:13:05.511 "allow_any_host": true, 00:13:05.511 "hosts": [], 00:13:05.511 "serial_number": "SPDK00000000000001", 00:13:05.511 "model_number": "SPDK bdev Controller", 00:13:05.511 "max_namespaces": 32, 00:13:05.511 "min_cntlid": 1, 00:13:05.511 "max_cntlid": 65519, 00:13:05.511 "namespaces": [ 00:13:05.511 { 00:13:05.511 "nsid": 1, 00:13:05.511 "bdev_name": "Null1", 00:13:05.511 "name": "Null1", 00:13:05.511 "nguid": "CC4E2A3147B94DADBA89CE1FA639ABAF", 00:13:05.511 "uuid": "cc4e2a31-47b9-4dad-ba89-ce1fa639abaf" 00:13:05.511 } 00:13:05.511 ] 00:13:05.511 }, 00:13:05.511 { 00:13:05.511 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:05.511 "subtype": "NVMe", 00:13:05.511 "listen_addresses": [ 00:13:05.511 { 00:13:05.511 "trtype": "TCP", 00:13:05.511 "adrfam": "IPv4", 00:13:05.511 "traddr": "10.0.0.2", 00:13:05.511 "trsvcid": "4420" 00:13:05.511 } 00:13:05.511 ], 00:13:05.511 "allow_any_host": true, 00:13:05.511 "hosts": [], 00:13:05.511 "serial_number": "SPDK00000000000002", 00:13:05.511 "model_number": "SPDK bdev Controller", 00:13:05.511 "max_namespaces": 32, 00:13:05.511 "min_cntlid": 1, 00:13:05.511 "max_cntlid": 65519, 00:13:05.511 "namespaces": [ 00:13:05.511 { 00:13:05.511 "nsid": 1, 00:13:05.511 "bdev_name": "Null2", 00:13:05.511 "name": "Null2", 00:13:05.511 "nguid": "D122EB2264F94D75BBB7CCB90FAA212D", 
00:13:05.511 "uuid": "d122eb22-64f9-4d75-bbb7-ccb90faa212d" 00:13:05.511 } 00:13:05.511 ] 00:13:05.511 }, 00:13:05.511 { 00:13:05.511 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:05.511 "subtype": "NVMe", 00:13:05.511 "listen_addresses": [ 00:13:05.511 { 00:13:05.511 "trtype": "TCP", 00:13:05.511 "adrfam": "IPv4", 00:13:05.511 "traddr": "10.0.0.2", 00:13:05.511 "trsvcid": "4420" 00:13:05.511 } 00:13:05.511 ], 00:13:05.511 "allow_any_host": true, 00:13:05.511 "hosts": [], 00:13:05.511 "serial_number": "SPDK00000000000003", 00:13:05.511 "model_number": "SPDK bdev Controller", 00:13:05.511 "max_namespaces": 32, 00:13:05.511 "min_cntlid": 1, 00:13:05.511 "max_cntlid": 65519, 00:13:05.511 "namespaces": [ 00:13:05.511 { 00:13:05.511 "nsid": 1, 00:13:05.511 "bdev_name": "Null3", 00:13:05.511 "name": "Null3", 00:13:05.511 "nguid": "C0DE2D2A28AF4A418B1639581304056A", 00:13:05.511 "uuid": "c0de2d2a-28af-4a41-8b16-39581304056a" 00:13:05.511 } 00:13:05.511 ] 00:13:05.511 }, 00:13:05.511 { 00:13:05.511 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:05.511 "subtype": "NVMe", 00:13:05.511 "listen_addresses": [ 00:13:05.511 { 00:13:05.511 "trtype": "TCP", 00:13:05.511 "adrfam": "IPv4", 00:13:05.511 "traddr": "10.0.0.2", 00:13:05.511 "trsvcid": "4420" 00:13:05.511 } 00:13:05.511 ], 00:13:05.511 "allow_any_host": true, 00:13:05.511 "hosts": [], 00:13:05.511 "serial_number": "SPDK00000000000004", 00:13:05.511 "model_number": "SPDK bdev Controller", 00:13:05.511 "max_namespaces": 32, 00:13:05.511 "min_cntlid": 1, 00:13:05.511 "max_cntlid": 65519, 00:13:05.511 "namespaces": [ 00:13:05.511 { 00:13:05.511 "nsid": 1, 00:13:05.511 "bdev_name": "Null4", 00:13:05.511 "name": "Null4", 00:13:05.511 "nguid": "04C394E1448942E38C1CC700B6CE8BD2", 00:13:05.511 "uuid": "04c394e1-4489-42e3-8c1c-c700b6ce8bd2" 00:13:05.511 } 00:13:05.511 ] 00:13:05.511 } 00:13:05.511 ] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 
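The `nvmf_get_subsystems` JSON above can be checked mechanically: the discovery subsystem plus the four NVMe subsystems should yield five NQNs. A small sketch of that extraction; the inlined sample is abridged from the log output (in the real test the JSON would come from `rpc_cmd`):

```shell
#!/usr/bin/env bash
# Sketch: pull the "nqn" values out of nvmf_get_subsystems output.
# Sample lines abridged from the JSON printed above.
json='"nqn": "nqn.2014-08.org.nvmexpress.discovery",
"nqn": "nqn.2016-06.io.spdk:cnode1",
"nqn": "nqn.2016-06.io.spdk:cnode2",
"nqn": "nqn.2016-06.io.spdk:cnode3",
"nqn": "nqn.2016-06.io.spdk:cnode4",'

# One NQN per line; sed keeps only the quoted value after each "nqn": key.
nqns=$(printf '%s\n' "$json" | sed -n 's/.*"nqn": "\([^"]*\)".*/\1/p')
printf '%s\n' "$nqns"
```

With a full JSON parser available, `rpc.py nvmf_get_subsystems | jq -r '.[].nqn'` would be the more robust equivalent of the sed line.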
12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.511 rmmod nvme_tcp 00:13:05.511 rmmod nvme_fabrics 00:13:05.511 rmmod nvme_keyring 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1909980 ']' 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1909980 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1909980 ']' 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1909980 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.511 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1909980 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1909980' 00:13:05.769 killing process with pid 1909980 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1909980 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1909980 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.769 12:57:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:08.306 00:13:08.306 real 0m8.856s 00:13:08.306 user 0m5.498s 00:13:08.306 sys 0m4.448s 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:08.306 ************************************ 00:13:08.306 END TEST nvmf_target_discovery 00:13:08.306 ************************************ 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.306 ************************************ 00:13:08.306 START TEST nvmf_referrals 00:13:08.306 ************************************ 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:08.306 * Looking for test storage... 
00:13:08.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:08.306 12:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:08.306 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:08.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.307 
--rc genhtml_branch_coverage=1 00:13:08.307 --rc genhtml_function_coverage=1 00:13:08.307 --rc genhtml_legend=1 00:13:08.307 --rc geninfo_all_blocks=1 00:13:08.307 --rc geninfo_unexecuted_blocks=1 00:13:08.307 00:13:08.307 ' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:08.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.307 --rc genhtml_branch_coverage=1 00:13:08.307 --rc genhtml_function_coverage=1 00:13:08.307 --rc genhtml_legend=1 00:13:08.307 --rc geninfo_all_blocks=1 00:13:08.307 --rc geninfo_unexecuted_blocks=1 00:13:08.307 00:13:08.307 ' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:08.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.307 --rc genhtml_branch_coverage=1 00:13:08.307 --rc genhtml_function_coverage=1 00:13:08.307 --rc genhtml_legend=1 00:13:08.307 --rc geninfo_all_blocks=1 00:13:08.307 --rc geninfo_unexecuted_blocks=1 00:13:08.307 00:13:08.307 ' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:08.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.307 --rc genhtml_branch_coverage=1 00:13:08.307 --rc genhtml_function_coverage=1 00:13:08.307 --rc genhtml_legend=1 00:13:08.307 --rc geninfo_all_blocks=1 00:13:08.307 --rc geninfo_unexecuted_blocks=1 00:13:08.307 00:13:08.307 ' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.307 
12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.307 12:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:08.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:08.307 12:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:08.307 12:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:13.581 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.581 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:13.582 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:13.582 Found net devices under 0000:86:00.0: cvl_0_0 00:13:13.582 12:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:13.582 Found net devices under 0000:86:00.1: cvl_0_1 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.582 12:57:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:13.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:13.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms
00:13:13.582 
00:13:13.582 --- 10.0.0.2 ping statistics ---
00:13:13.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:13.582 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:13.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:13.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms
00:13:13.582 
00:13:13.582 --- 10.0.0.1 ping statistics ---
00:13:13.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:13.582 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1913932
00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1913932
00:13:13.582 
12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1913932 ']' 00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.582 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.583 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.583 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.583 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.583 [2024-11-29 12:57:13.225349] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:13:13.583 [2024-11-29 12:57:13.225399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.583 [2024-11-29 12:57:13.292831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.583 [2024-11-29 12:57:13.335352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.583 [2024-11-29 12:57:13.335394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:13.583 [2024-11-29 12:57:13.335402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.583 [2024-11-29 12:57:13.335408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.583 [2024-11-29 12:57:13.335413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.583 [2024-11-29 12:57:13.337000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.583 [2024-11-29 12:57:13.337100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.583 [2024-11-29 12:57:13.337165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.583 [2024-11-29 12:57:13.337166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.842 [2024-11-29 12:57:13.488112] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.842 [2024-11-29 12:57:13.510096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:13.842 12:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:13.842 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.101 12:57:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:14.101 12:57:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:14.359 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:14.360 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:14.360 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:14.360 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:14.360 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.360 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:14.360 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:14.618 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:14.618 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:14.618 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:14.618 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:14.618 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:14.618 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.618 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.877 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.135 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:15.393 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:15.393 12:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:15.393 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:15.393 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:15.393 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.393 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:15.393 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:15.393 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:15.393 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.393 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:15.651 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.909 rmmod nvme_tcp 00:13:15.909 rmmod nvme_fabrics 00:13:15.909 rmmod nvme_keyring 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1913932 ']' 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1913932 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1913932 ']' 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1913932 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1913932 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1913932' 00:13:15.909 killing process with pid 1913932 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1913932 00:13:15.909 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1913932 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.168 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.072 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.072 00:13:18.072 real 0m10.168s 00:13:18.072 user 0m12.073s 00:13:18.072 sys 0m4.762s 00:13:18.072 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.072 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:18.072 
************************************ 00:13:18.072 END TEST nvmf_referrals 00:13:18.072 ************************************ 00:13:18.332 12:57:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:18.332 12:57:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.332 12:57:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.332 12:57:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.332 ************************************ 00:13:18.332 START TEST nvmf_connect_disconnect 00:13:18.332 ************************************ 00:13:18.332 12:57:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:18.332 * Looking for test storage... 
00:13:18.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:18.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.332 --rc genhtml_branch_coverage=1 00:13:18.332 --rc genhtml_function_coverage=1 00:13:18.332 --rc genhtml_legend=1 00:13:18.332 --rc geninfo_all_blocks=1 00:13:18.332 --rc geninfo_unexecuted_blocks=1 00:13:18.332 00:13:18.332 ' 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:18.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.332 --rc genhtml_branch_coverage=1 00:13:18.332 --rc genhtml_function_coverage=1 00:13:18.332 --rc genhtml_legend=1 00:13:18.332 --rc geninfo_all_blocks=1 00:13:18.332 --rc geninfo_unexecuted_blocks=1 00:13:18.332 00:13:18.332 ' 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:18.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.332 --rc genhtml_branch_coverage=1 00:13:18.332 --rc genhtml_function_coverage=1 00:13:18.332 --rc genhtml_legend=1 00:13:18.332 --rc geninfo_all_blocks=1 00:13:18.332 --rc geninfo_unexecuted_blocks=1 00:13:18.332 00:13:18.332 ' 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:18.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.332 --rc genhtml_branch_coverage=1 00:13:18.332 --rc genhtml_function_coverage=1 00:13:18.332 --rc genhtml_legend=1 00:13:18.332 --rc geninfo_all_blocks=1 00:13:18.332 --rc geninfo_unexecuted_blocks=1 00:13:18.332 00:13:18.332 ' 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.332 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.333 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.593 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.593 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.593 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.593 12:57:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.869 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:23.869 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:23.869 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:23.869 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.869 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:23.869 Found net devices under 0000:86:00.0: cvl_0_0 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:23.869 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:23.869 Found net devices under 0000:86:00.1: cvl_0_1 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:23.869 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.870 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:23.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:13:23.870 00:13:23.870 --- 10.0.0.2 ping statistics --- 00:13:23.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.870 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:23.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:13:23.870 00:13:23.870 --- 10.0.0.1 ping statistics --- 00:13:23.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.870 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1917803 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1917803 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1917803 ']' 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.870 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:23.870 [2024-11-29 12:57:23.636983] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:13:23.870 [2024-11-29 12:57:23.637033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.129 [2024-11-29 12:57:23.704968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.129 [2024-11-29 12:57:23.748039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:24.129 [2024-11-29 12:57:23.748077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.129 [2024-11-29 12:57:23.748084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.129 [2024-11-29 12:57:23.748090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.129 [2024-11-29 12:57:23.748096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.129 [2024-11-29 12:57:23.749723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.129 [2024-11-29 12:57:23.749819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.129 [2024-11-29 12:57:23.749905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.129 [2024-11-29 12:57:23.749904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:24.129 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.129 [2024-11-29 12:57:23.893312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.129 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.129 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.388 [2024-11-29 12:57:23.952749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.388 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.388 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:24.388 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:24.388 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:27.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:41.061 12:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:41.061 rmmod nvme_tcp 00:13:41.061 rmmod nvme_fabrics 00:13:41.061 rmmod nvme_keyring 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1917803 ']' 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1917803 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1917803 ']' 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1917803 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917803 
00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917803' 00:13:41.061 killing process with pid 1917803 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1917803 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1917803 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.061 12:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.061 12:57:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.971 00:13:42.971 real 0m24.668s 00:13:42.971 user 1m8.165s 00:13:42.971 sys 0m5.445s 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:42.971 ************************************ 00:13:42.971 END TEST nvmf_connect_disconnect 00:13:42.971 ************************************ 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.971 ************************************ 00:13:42.971 START TEST nvmf_multitarget 00:13:42.971 ************************************ 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:42.971 * Looking for test storage... 
00:13:42.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.971 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:43.232 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.232 --rc genhtml_branch_coverage=1 00:13:43.232 --rc genhtml_function_coverage=1 00:13:43.232 --rc genhtml_legend=1 00:13:43.232 --rc geninfo_all_blocks=1 00:13:43.232 --rc geninfo_unexecuted_blocks=1 00:13:43.232 00:13:43.232 ' 00:13:43.232 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.233 --rc genhtml_branch_coverage=1 00:13:43.233 --rc genhtml_function_coverage=1 00:13:43.233 --rc genhtml_legend=1 00:13:43.233 --rc geninfo_all_blocks=1 00:13:43.233 --rc geninfo_unexecuted_blocks=1 00:13:43.233 00:13:43.233 ' 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.233 --rc genhtml_branch_coverage=1 00:13:43.233 --rc genhtml_function_coverage=1 00:13:43.233 --rc genhtml_legend=1 00:13:43.233 --rc geninfo_all_blocks=1 00:13:43.233 --rc geninfo_unexecuted_blocks=1 00:13:43.233 00:13:43.233 ' 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.233 --rc genhtml_branch_coverage=1 00:13:43.233 --rc genhtml_function_coverage=1 00:13:43.233 --rc genhtml_legend=1 00:13:43.233 --rc geninfo_all_blocks=1 00:13:43.233 --rc geninfo_unexecuted_blocks=1 00:13:43.233 00:13:43.233 ' 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.233 12:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:43.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.233 12:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:43.233 12:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:48.502 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.502 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.502 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.502 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.502 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.502 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.502 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.503 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:48.503 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.503 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:48.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:48.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.503 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:48.503 Found net devices under 0000:86:00.0: cvl_0_0 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.503 
12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:48.503 Found net devices under 0000:86:00.1: cvl_0_1 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.503 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:48.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:13:48.503 00:13:48.503 --- 10.0.0.2 ping statistics --- 00:13:48.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.503 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:48.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:13:48.503 00:13:48.503 --- 10.0.0.1 ping statistics --- 00:13:48.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.503 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:48.503 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1924179 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1924179 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1924179 ']' 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.504 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:48.504 [2024-11-29 12:57:48.318911] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:13:48.504 [2024-11-29 12:57:48.318970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.762 [2024-11-29 12:57:48.385405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.762 [2024-11-29 12:57:48.428168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.762 [2024-11-29 12:57:48.428206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:48.762 [2024-11-29 12:57:48.428213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.762 [2024-11-29 12:57:48.428219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.763 [2024-11-29 12:57:48.428224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.763 [2024-11-29 12:57:48.429641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.763 [2024-11-29 12:57:48.429731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.763 [2024-11-29 12:57:48.429818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.763 [2024-11-29 12:57:48.429820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.763 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.763 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:48.763 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:48.763 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:48.763 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:48.763 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.763 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:48.763 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:48.763 12:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:49.021 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:49.021 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:49.021 "nvmf_tgt_1" 00:13:49.021 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:49.278 "nvmf_tgt_2" 00:13:49.278 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:49.278 12:57:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:49.278 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:49.278 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:49.278 true 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:49.537 true 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.537 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.537 rmmod nvme_tcp 00:13:49.796 rmmod nvme_fabrics 00:13:49.796 rmmod nvme_keyring 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1924179 ']' 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1924179 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1924179 ']' 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1924179 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1924179 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1924179' 00:13:49.796 killing process with pid 1924179 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1924179 00:13:49.796 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1924179 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.055 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.960 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:51.960 00:13:51.960 real 0m9.008s 00:13:51.960 user 0m7.128s 00:13:51.960 sys 0m4.458s 00:13:51.960 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.960 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:51.960 ************************************ 00:13:51.960 END TEST nvmf_multitarget 00:13:51.960 ************************************ 00:13:51.960 12:57:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:51.960 12:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:51.960 12:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.960 12:57:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.960 ************************************ 00:13:51.960 START TEST nvmf_rpc 00:13:51.960 ************************************ 00:13:51.960 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:52.219 * Looking for test storage... 
00:13:52.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.219 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.219 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:52.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.219 --rc genhtml_branch_coverage=1 00:13:52.219 --rc genhtml_function_coverage=1 00:13:52.219 --rc genhtml_legend=1 00:13:52.219 --rc geninfo_all_blocks=1 00:13:52.219 --rc geninfo_unexecuted_blocks=1 
00:13:52.219 00:13:52.219 ' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:52.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.220 --rc genhtml_branch_coverage=1 00:13:52.220 --rc genhtml_function_coverage=1 00:13:52.220 --rc genhtml_legend=1 00:13:52.220 --rc geninfo_all_blocks=1 00:13:52.220 --rc geninfo_unexecuted_blocks=1 00:13:52.220 00:13:52.220 ' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:52.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.220 --rc genhtml_branch_coverage=1 00:13:52.220 --rc genhtml_function_coverage=1 00:13:52.220 --rc genhtml_legend=1 00:13:52.220 --rc geninfo_all_blocks=1 00:13:52.220 --rc geninfo_unexecuted_blocks=1 00:13:52.220 00:13:52.220 ' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:52.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.220 --rc genhtml_branch_coverage=1 00:13:52.220 --rc genhtml_function_coverage=1 00:13:52.220 --rc genhtml_legend=1 00:13:52.220 --rc geninfo_all_blocks=1 00:13:52.220 --rc geninfo_unexecuted_blocks=1 00:13:52.220 00:13:52.220 ' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.220 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.220 12:57:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.220 12:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.491 
12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:13:57.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:57.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:57.491 Found net devices under 0000:86:00.0: cvl_0_0 00:13:57.491 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:57.492 Found net devices under 0000:86:00.1: cvl_0_1 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.492 12:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:57.492 
12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:57.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:13:57.492 00:13:57.492 --- 10.0.0.2 ping statistics --- 00:13:57.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.492 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:57.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:13:57.492 00:13:57.492 --- 10.0.0.1 ping statistics --- 00:13:57.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.492 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1927769 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:57.492 
12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1927769 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1927769 ']' 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.492 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.750 [2024-11-29 12:57:57.344692] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:13:57.750 [2024-11-29 12:57:57.344744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.750 [2024-11-29 12:57:57.411767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.750 [2024-11-29 12:57:57.454885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.750 [2024-11-29 12:57:57.454924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.750 [2024-11-29 12:57:57.454930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.750 [2024-11-29 12:57:57.454937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:57.750 [2024-11-29 12:57:57.454942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.751 [2024-11-29 12:57:57.456425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.751 [2024-11-29 12:57:57.456521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.751 [2024-11-29 12:57:57.456609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.751 [2024-11-29 12:57:57.456611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.751 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.751 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:57.751 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.751 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.751 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.008 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:58.009 "tick_rate": 2300000000, 00:13:58.009 "poll_groups": [ 00:13:58.009 { 00:13:58.009 "name": "nvmf_tgt_poll_group_000", 00:13:58.009 "admin_qpairs": 0, 00:13:58.009 "io_qpairs": 0, 00:13:58.009 
"current_admin_qpairs": 0, 00:13:58.009 "current_io_qpairs": 0, 00:13:58.009 "pending_bdev_io": 0, 00:13:58.009 "completed_nvme_io": 0, 00:13:58.009 "transports": [] 00:13:58.009 }, 00:13:58.009 { 00:13:58.009 "name": "nvmf_tgt_poll_group_001", 00:13:58.009 "admin_qpairs": 0, 00:13:58.009 "io_qpairs": 0, 00:13:58.009 "current_admin_qpairs": 0, 00:13:58.009 "current_io_qpairs": 0, 00:13:58.009 "pending_bdev_io": 0, 00:13:58.009 "completed_nvme_io": 0, 00:13:58.009 "transports": [] 00:13:58.009 }, 00:13:58.009 { 00:13:58.009 "name": "nvmf_tgt_poll_group_002", 00:13:58.009 "admin_qpairs": 0, 00:13:58.009 "io_qpairs": 0, 00:13:58.009 "current_admin_qpairs": 0, 00:13:58.009 "current_io_qpairs": 0, 00:13:58.009 "pending_bdev_io": 0, 00:13:58.009 "completed_nvme_io": 0, 00:13:58.009 "transports": [] 00:13:58.009 }, 00:13:58.009 { 00:13:58.009 "name": "nvmf_tgt_poll_group_003", 00:13:58.009 "admin_qpairs": 0, 00:13:58.009 "io_qpairs": 0, 00:13:58.009 "current_admin_qpairs": 0, 00:13:58.009 "current_io_qpairs": 0, 00:13:58.009 "pending_bdev_io": 0, 00:13:58.009 "completed_nvme_io": 0, 00:13:58.009 "transports": [] 00:13:58.009 } 00:13:58.009 ] 00:13:58.009 }' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.009 [2024-11-29 12:57:57.708185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:58.009 "tick_rate": 2300000000, 00:13:58.009 "poll_groups": [ 00:13:58.009 { 00:13:58.009 "name": "nvmf_tgt_poll_group_000", 00:13:58.009 "admin_qpairs": 0, 00:13:58.009 "io_qpairs": 0, 00:13:58.009 "current_admin_qpairs": 0, 00:13:58.009 "current_io_qpairs": 0, 00:13:58.009 "pending_bdev_io": 0, 00:13:58.009 "completed_nvme_io": 0, 00:13:58.009 "transports": [ 00:13:58.009 { 00:13:58.009 "trtype": "TCP" 00:13:58.009 } 00:13:58.009 ] 00:13:58.009 }, 00:13:58.009 { 00:13:58.009 "name": "nvmf_tgt_poll_group_001", 00:13:58.009 "admin_qpairs": 0, 00:13:58.009 "io_qpairs": 0, 00:13:58.009 "current_admin_qpairs": 0, 00:13:58.009 "current_io_qpairs": 0, 00:13:58.009 "pending_bdev_io": 0, 00:13:58.009 "completed_nvme_io": 0, 00:13:58.009 "transports": [ 00:13:58.009 { 00:13:58.009 "trtype": "TCP" 00:13:58.009 } 00:13:58.009 ] 00:13:58.009 }, 00:13:58.009 { 00:13:58.009 "name": "nvmf_tgt_poll_group_002", 00:13:58.009 "admin_qpairs": 0, 00:13:58.009 "io_qpairs": 0, 00:13:58.009 
"current_admin_qpairs": 0, 00:13:58.009 "current_io_qpairs": 0, 00:13:58.009 "pending_bdev_io": 0, 00:13:58.009 "completed_nvme_io": 0, 00:13:58.009 "transports": [ 00:13:58.009 { 00:13:58.009 "trtype": "TCP" 00:13:58.009 } 00:13:58.009 ] 00:13:58.009 }, 00:13:58.009 { 00:13:58.009 "name": "nvmf_tgt_poll_group_003", 00:13:58.009 "admin_qpairs": 0, 00:13:58.009 "io_qpairs": 0, 00:13:58.009 "current_admin_qpairs": 0, 00:13:58.009 "current_io_qpairs": 0, 00:13:58.009 "pending_bdev_io": 0, 00:13:58.009 "completed_nvme_io": 0, 00:13:58.009 "transports": [ 00:13:58.009 { 00:13:58.009 "trtype": "TCP" 00:13:58.009 } 00:13:58.009 ] 00:13:58.009 } 00:13:58.009 ] 00:13:58.009 }' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.009 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 Malloc1 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 [2024-11-29 12:57:57.887644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.267 
12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:58.267 [2024-11-29 12:57:57.922332] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:58.267 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:58.267 could not add new controller: failed to write to nvme-fabrics device 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.267 12:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.267 12:57:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:59.641 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:59.641 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:59.641 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.641 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:59.641 12:57:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.540 12:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.540 [2024-11-29 12:58:01.244982] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:14:01.540 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:01.540 could not add new controller: failed to write to nvme-fabrics device 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:01.540 12:58:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.540 12:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:02.914 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.914 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:02.914 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.914 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:02.914 12:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:04.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.817 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.075 [2024-11-29 12:58:04.660511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.075 12:58:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.452 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.452 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:06.452 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.452 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:06.452 12:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.383 12:58:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 12:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 [2024-11-29 12:58:08.006699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.383 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.317 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:09.317 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:09.317 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.317 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:09.317 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:11.847 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:11.847 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:11.847 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.847 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:11.847 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.847 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:11.847 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
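The `waitforserial` steps traced above (from `autotest_common.sh`, lines @1210–@1212) poll `lsblk -l -o NAME,SERIAL` until the subsystem's serial (`SPDKISFASTANDAWESOME`) shows up as a block device, retrying up to 15 times with a sleep between attempts. A minimal, hedged sketch of that loop follows; the function name `wait_for_device` and the parameterized listing command are illustrative (the real helper calls `lsblk` directly and sleeps 2 seconds per retry), but the retry bound and `grep`-based check mirror the trace:

```shell
#!/bin/bash
# Sketch of the waitforserial polling loop seen in the trace.
# LIST_CMD is a parameter (hypothetical) so the probe can be stubbed;
# the real helper in autotest_common.sh invokes lsblk directly.
wait_for_device() {
    local serial=$1 list_cmd=$2 i=0
    while (( i++ <= 15 )); do           # same 15-retry bound as the trace
        # Match the serial as a whole word in the device listing,
        # e.g. "lsblk -l -o NAME,SERIAL" output.
        if $list_cmd | grep -q -w "$serial"; then
            return 0                    # device appeared
        fi
        sleep 1                         # trace uses "sleep 2" between probes
    done
    return 1                            # timed out waiting for the serial
}
```

Usage in the test would look like `wait_for_device SPDKISFASTANDAWESOME "lsblk -l -o NAME,SERIAL"` right after `nvme connect`; the matching `waitforserial_disconnect` helper inverts the check, looping until the serial disappears after `nvme disconnect`.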
00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.848 [2024-11-29 12:58:11.309939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.848 12:58:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.782 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.782 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:12.782 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:12.782 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:12.782 12:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:14.683 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:14.683 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:14.683 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.683 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:14.683 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.683 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:14.683 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.942 [2024-11-29 12:58:14.609858] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.942 12:58:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.342 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.342 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:16.342 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.342 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:16.342 12:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.245 [2024-11-29 12:58:17.966131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.245 12:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.245 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.246 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.620 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.620 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:19.620 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.620 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:19.620 12:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.520 [2024-11-29 12:58:21.259583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.520 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.521 [2024-11-29 12:58:21.307686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.521 
12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.521 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 [2024-11-29 12:58:21.355832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.780 
12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 [2024-11-29 12:58:21.404010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 [2024-11-29 
12:58:21.452176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 
12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.780 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:21.780 "tick_rate": 2300000000, 00:14:21.780 "poll_groups": [ 00:14:21.780 { 00:14:21.780 "name": "nvmf_tgt_poll_group_000", 00:14:21.780 "admin_qpairs": 2, 00:14:21.780 "io_qpairs": 168, 00:14:21.780 "current_admin_qpairs": 0, 00:14:21.780 "current_io_qpairs": 0, 00:14:21.780 "pending_bdev_io": 0, 00:14:21.780 "completed_nvme_io": 289, 00:14:21.780 "transports": [ 00:14:21.780 { 00:14:21.780 "trtype": "TCP" 00:14:21.780 } 00:14:21.780 ] 00:14:21.780 }, 00:14:21.780 { 00:14:21.780 "name": "nvmf_tgt_poll_group_001", 00:14:21.780 "admin_qpairs": 2, 00:14:21.780 "io_qpairs": 168, 00:14:21.780 "current_admin_qpairs": 0, 00:14:21.780 "current_io_qpairs": 0, 00:14:21.780 "pending_bdev_io": 0, 00:14:21.780 "completed_nvme_io": 247, 00:14:21.780 "transports": [ 00:14:21.780 { 00:14:21.780 "trtype": "TCP" 00:14:21.780 } 00:14:21.780 ] 00:14:21.780 }, 00:14:21.780 { 00:14:21.780 "name": "nvmf_tgt_poll_group_002", 00:14:21.780 "admin_qpairs": 1, 00:14:21.780 "io_qpairs": 168, 00:14:21.780 "current_admin_qpairs": 0, 00:14:21.780 "current_io_qpairs": 0, 00:14:21.781 "pending_bdev_io": 0, 00:14:21.781 "completed_nvme_io": 317, 00:14:21.781 "transports": [ 00:14:21.781 { 00:14:21.781 "trtype": "TCP" 00:14:21.781 } 00:14:21.781 ] 00:14:21.781 }, 00:14:21.781 { 00:14:21.781 "name": "nvmf_tgt_poll_group_003", 00:14:21.781 "admin_qpairs": 2, 00:14:21.781 "io_qpairs": 168, 
00:14:21.781 "current_admin_qpairs": 0, 00:14:21.781 "current_io_qpairs": 0, 00:14:21.781 "pending_bdev_io": 0, 00:14:21.781 "completed_nvme_io": 169, 00:14:21.781 "transports": [ 00:14:21.781 { 00:14:21.781 "trtype": "TCP" 00:14:21.781 } 00:14:21.781 ] 00:14:21.781 } 00:14:21.781 ] 00:14:21.781 }' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:21.781 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:22.040 rmmod nvme_tcp 00:14:22.040 rmmod nvme_fabrics 00:14:22.040 rmmod nvme_keyring 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1927769 ']' 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1927769 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1927769 ']' 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1927769 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1927769 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1927769' 00:14:22.040 killing process with pid 1927769 00:14:22.040 12:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1927769 00:14:22.040 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1927769 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.299 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.203 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:24.203 00:14:24.203 real 0m32.240s 00:14:24.203 user 1m39.028s 00:14:24.203 sys 0m5.973s 00:14:24.203 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.203 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.203 ************************************ 00:14:24.203 END TEST 
nvmf_rpc 00:14:24.203 ************************************ 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.462 ************************************ 00:14:24.462 START TEST nvmf_invalid 00:14:24.462 ************************************ 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:24.462 * Looking for test storage... 00:14:24.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:24.462 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:24.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.463 --rc genhtml_branch_coverage=1 00:14:24.463 --rc genhtml_function_coverage=1 00:14:24.463 --rc genhtml_legend=1 00:14:24.463 --rc geninfo_all_blocks=1 00:14:24.463 --rc geninfo_unexecuted_blocks=1 00:14:24.463 00:14:24.463 ' 
00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:24.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.463 --rc genhtml_branch_coverage=1 00:14:24.463 --rc genhtml_function_coverage=1 00:14:24.463 --rc genhtml_legend=1 00:14:24.463 --rc geninfo_all_blocks=1 00:14:24.463 --rc geninfo_unexecuted_blocks=1 00:14:24.463 00:14:24.463 ' 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:24.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.463 --rc genhtml_branch_coverage=1 00:14:24.463 --rc genhtml_function_coverage=1 00:14:24.463 --rc genhtml_legend=1 00:14:24.463 --rc geninfo_all_blocks=1 00:14:24.463 --rc geninfo_unexecuted_blocks=1 00:14:24.463 00:14:24.463 ' 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:24.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.463 --rc genhtml_branch_coverage=1 00:14:24.463 --rc genhtml_function_coverage=1 00:14:24.463 --rc genhtml_legend=1 00:14:24.463 --rc geninfo_all_blocks=1 00:14:24.463 --rc geninfo_unexecuted_blocks=1 00:14:24.463 00:14:24.463 ' 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.463 12:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.463 
12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.463 12:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:24.463 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:24.463 12:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.723 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.723 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.723 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:24.723 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:24.723 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:24.723 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:29.994 12:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.994 12:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:29.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:29.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:29.994 Found net devices under 0000:86:00.0: cvl_0_0 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:29.994 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:29.995 Found net devices under 0000:86:00.1: cvl_0_1 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.995 12:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.995 12:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:29.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:14:29.995 00:14:29.995 --- 10.0.0.2 ping statistics --- 00:14:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.995 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:14:29.995 00:14:29.995 --- 10.0.0.1 ping statistics --- 00:14:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.995 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:29.995 12:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1935555 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1935555 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1935555 ']' 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.995 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:30.254 [2024-11-29 12:58:29.850698] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:14:30.254 [2024-11-29 12:58:29.850747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.254 [2024-11-29 12:58:29.917691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.254 [2024-11-29 12:58:29.958356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.254 [2024-11-29 12:58:29.958398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.254 [2024-11-29 12:58:29.958406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.254 [2024-11-29 12:58:29.958412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.254 [2024-11-29 12:58:29.958417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.254 [2024-11-29 12:58:29.959928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.254 [2024-11-29 12:58:29.960046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.254 [2024-11-29 12:58:29.960067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.254 [2024-11-29 12:58:29.960069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.254 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.254 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:30.254 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:30.254 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:30.254 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:30.512 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.512 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:30.512 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24398 00:14:30.512 [2024-11-29 12:58:30.278922] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:30.512 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:30.512 { 00:14:30.512 "nqn": "nqn.2016-06.io.spdk:cnode24398", 00:14:30.512 "tgt_name": "foobar", 00:14:30.512 "method": "nvmf_create_subsystem", 00:14:30.512 "req_id": 1 00:14:30.512 } 00:14:30.512 Got JSON-RPC error 
response 00:14:30.512 response: 00:14:30.512 { 00:14:30.512 "code": -32603, 00:14:30.512 "message": "Unable to find target foobar" 00:14:30.512 }' 00:14:30.512 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:30.512 { 00:14:30.512 "nqn": "nqn.2016-06.io.spdk:cnode24398", 00:14:30.512 "tgt_name": "foobar", 00:14:30.512 "method": "nvmf_create_subsystem", 00:14:30.512 "req_id": 1 00:14:30.512 } 00:14:30.512 Got JSON-RPC error response 00:14:30.512 response: 00:14:30.512 { 00:14:30.512 "code": -32603, 00:14:30.512 "message": "Unable to find target foobar" 00:14:30.512 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:30.512 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:30.513 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25746 00:14:30.771 [2024-11-29 12:58:30.467569] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25746: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:30.771 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:30.771 { 00:14:30.771 "nqn": "nqn.2016-06.io.spdk:cnode25746", 00:14:30.771 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:30.771 "method": "nvmf_create_subsystem", 00:14:30.771 "req_id": 1 00:14:30.771 } 00:14:30.771 Got JSON-RPC error response 00:14:30.771 response: 00:14:30.771 { 00:14:30.771 "code": -32602, 00:14:30.771 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:30.771 }' 00:14:30.771 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:30.771 { 00:14:30.771 "nqn": "nqn.2016-06.io.spdk:cnode25746", 00:14:30.771 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:30.771 "method": "nvmf_create_subsystem", 
00:14:30.771 "req_id": 1 00:14:30.771 } 00:14:30.771 Got JSON-RPC error response 00:14:30.771 response: 00:14:30.771 { 00:14:30.771 "code": -32602, 00:14:30.771 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:30.771 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:30.771 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:30.771 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6238 00:14:31.031 [2024-11-29 12:58:30.676278] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6238: invalid model number 'SPDK_Controller' 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:31.031 { 00:14:31.031 "nqn": "nqn.2016-06.io.spdk:cnode6238", 00:14:31.031 "model_number": "SPDK_Controller\u001f", 00:14:31.031 "method": "nvmf_create_subsystem", 00:14:31.031 "req_id": 1 00:14:31.031 } 00:14:31.031 Got JSON-RPC error response 00:14:31.031 response: 00:14:31.031 { 00:14:31.031 "code": -32602, 00:14:31.031 "message": "Invalid MN SPDK_Controller\u001f" 00:14:31.031 }' 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:31.031 { 00:14:31.031 "nqn": "nqn.2016-06.io.spdk:cnode6238", 00:14:31.031 "model_number": "SPDK_Controller\u001f", 00:14:31.031 "method": "nvmf_create_subsystem", 00:14:31.031 "req_id": 1 00:14:31.031 } 00:14:31.031 Got JSON-RPC error response 00:14:31.031 response: 00:14:31.031 { 00:14:31.031 "code": -32602, 00:14:31.031 "message": "Invalid MN SPDK_Controller\u001f" 00:14:31.031 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.031 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:31.032 12:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:31.032 12:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:31.032 12:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.032 12:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'c6ZjK}UbWH^38kH/9zt*' 00:14:31.032 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'c6ZjK}UbWH^38kH/9zt*' nqn.2016-06.io.spdk:cnode13042 00:14:31.292 [2024-11-29 12:58:31.025493] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13042: invalid serial number 'c6ZjK}UbWH^38kH/9zt*' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:31.292 { 00:14:31.292 "nqn": "nqn.2016-06.io.spdk:cnode13042", 00:14:31.292 "serial_number": "c6ZjK}UbWH^\u007f38kH/9zt*", 00:14:31.292 "method": "nvmf_create_subsystem", 00:14:31.292 "req_id": 1 00:14:31.292 } 00:14:31.292 Got JSON-RPC error response 00:14:31.292 response: 00:14:31.292 { 00:14:31.292 "code": -32602, 00:14:31.292 "message": "Invalid SN c6ZjK}UbWH^\u007f38kH/9zt*" 00:14:31.292 }' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:31.292 { 00:14:31.292 "nqn": "nqn.2016-06.io.spdk:cnode13042", 00:14:31.292 "serial_number": "c6ZjK}UbWH^\u007f38kH/9zt*", 00:14:31.292 "method": "nvmf_create_subsystem", 00:14:31.292 "req_id": 1 00:14:31.292 } 00:14:31.292 Got JSON-RPC error response 00:14:31.292 response: 00:14:31.292 { 00:14:31.292 "code": -32602, 00:14:31.292 "message": "Invalid SN c6ZjK}UbWH^\u007f38kH/9zt*" 00:14:31.292 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.292 12:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:31.292 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 
00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:31.552 
12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:31.552 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:31.553 12:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:31.553 12:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:31.553 12:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.553 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.554 12:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:14:31.554 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\K&W!R'\''Um/Z /dev/null' 00:14:33.886 12:58:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:36.421 00:14:36.421 real 0m11.553s 00:14:36.421 user 0m18.452s 00:14:36.421 sys 0m5.049s 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.421 ************************************ 00:14:36.421 END TEST nvmf_invalid 00:14:36.421 ************************************ 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.421 ************************************ 00:14:36.421 START TEST nvmf_connect_stress 00:14:36.421 ************************************ 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:36.421 * Looking for test storage... 
00:14:36.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.421 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:36.422 12:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.422 12:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:36.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.422 --rc genhtml_branch_coverage=1 00:14:36.422 --rc genhtml_function_coverage=1 00:14:36.422 --rc genhtml_legend=1 00:14:36.422 --rc geninfo_all_blocks=1 00:14:36.422 --rc geninfo_unexecuted_blocks=1 00:14:36.422 00:14:36.422 ' 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:36.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.422 --rc genhtml_branch_coverage=1 00:14:36.422 --rc genhtml_function_coverage=1 00:14:36.422 --rc genhtml_legend=1 00:14:36.422 --rc geninfo_all_blocks=1 00:14:36.422 --rc geninfo_unexecuted_blocks=1 00:14:36.422 00:14:36.422 ' 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:36.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.422 --rc genhtml_branch_coverage=1 00:14:36.422 --rc genhtml_function_coverage=1 00:14:36.422 --rc genhtml_legend=1 00:14:36.422 --rc geninfo_all_blocks=1 00:14:36.422 --rc geninfo_unexecuted_blocks=1 00:14:36.422 00:14:36.422 ' 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:36.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.422 --rc genhtml_branch_coverage=1 00:14:36.422 --rc genhtml_function_coverage=1 00:14:36.422 --rc genhtml_legend=1 00:14:36.422 --rc geninfo_all_blocks=1 00:14:36.422 --rc geninfo_unexecuted_blocks=1 00:14:36.422 00:14:36.422 ' 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.422 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:36.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:36.423 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.696 12:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:41.696 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.696 12:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:41.696 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.696 12:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:41.696 Found net devices under 0000:86:00.0: cvl_0_0 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:41.696 Found net devices under 0000:86:00.1: cvl_0_1 
00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:41.696 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:41.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:41.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:14:41.697 00:14:41.697 --- 10.0.0.2 ping statistics --- 00:14:41.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.697 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:14:41.697 00:14:41.697 --- 10.0.0.1 ping statistics --- 00:14:41.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.697 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:41.697 12:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1939682 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1939682 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1939682 ']' 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.697 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.697 [2024-11-29 12:58:41.017637] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:14:41.697 [2024-11-29 12:58:41.017681] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.697 [2024-11-29 12:58:41.084853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:41.697 [2024-11-29 12:58:41.126781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.697 [2024-11-29 12:58:41.126819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.697 [2024-11-29 12:58:41.126832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.697 [2024-11-29 12:58:41.126840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.697 [2024-11-29 12:58:41.126846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:41.697 [2024-11-29 12:58:41.128273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.697 [2024-11-29 12:58:41.128360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.697 [2024-11-29 12:58:41.128363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.697 [2024-11-29 12:58:41.266279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.697 [2024-11-29 12:58:41.286491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.697 NULL1 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1939741 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.697 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.698 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.956 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.956 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:41.956 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.956 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.956 12:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.523 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.523 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:42.523 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.523 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.523 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.782 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.782 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:42.782 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.782 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.782 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.040 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.040 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:43.040 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.040 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.040 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.299 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.299 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:43.299 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.299 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.299 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.558 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.558 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:43.558 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.558 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.558 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.125 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.125 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:44.125 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.125 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.125 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.384 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.384 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:44.384 12:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.384 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.384 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:44.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.643 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.902 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.902 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:44.902 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.902 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.902 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:45.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.160 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.160 
12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.726 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.726 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:45.726 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.726 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.726 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.985 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.985 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:45.985 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.985 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.985 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.243 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.243 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:46.243 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.243 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.243 12:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.502 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.502 
12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:46.502 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.502 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.502 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.069 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.069 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:47.069 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.069 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.069 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.328 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.328 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:47.328 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.328 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.328 12:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.585 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.585 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:47.585 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:47.585 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.585 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.843 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.843 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:47.843 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.843 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.843 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.101 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.101 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:48.101 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.101 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.101 12:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.668 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.668 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:48.668 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.668 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.668 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:14:48.927 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.927 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:48.927 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.927 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.927 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.186 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.186 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:49.186 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.186 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.186 12:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.445 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.445 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:49.445 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.445 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.445 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.012 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.012 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1939741 00:14:50.012 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.012 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.012 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.271 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.271 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:50.271 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.271 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.271 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.529 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.529 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:50.529 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.529 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.529 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.786 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.786 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:50.786 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.786 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:50.786 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.045 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.045 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:51.045 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.045 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.045 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.611 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.611 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:51.611 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.611 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.611 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.871 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1939741 00:14:51.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1939741) - No such process 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1939741 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.871 rmmod nvme_tcp 00:14:51.871 rmmod nvme_fabrics 00:14:51.871 rmmod nvme_keyring 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1939682 ']' 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1939682 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1939682 ']' 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1939682 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1939682 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1939682' 00:14:51.871 killing process with pid 1939682 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1939682 00:14:51.871 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1939682 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.131 12:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.131 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.035 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:54.035 00:14:54.035 real 0m18.132s 00:14:54.035 user 0m38.963s 00:14:54.035 sys 0m8.085s 00:14:54.035 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.035 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.035 ************************************ 00:14:54.035 END TEST nvmf_connect_stress 00:14:54.035 ************************************ 00:14:54.293 12:58:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:54.293 12:58:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:54.293 12:58:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.293 12:58:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:54.293 ************************************ 00:14:54.293 START TEST nvmf_fused_ordering 00:14:54.293 ************************************ 00:14:54.293 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:54.293 * Looking for test storage... 
00:14:54.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.293 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:54.293 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:54.293 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:54.293 12:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:54.293 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.294 12:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:54.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.294 --rc genhtml_branch_coverage=1 00:14:54.294 --rc genhtml_function_coverage=1 00:14:54.294 --rc genhtml_legend=1 00:14:54.294 --rc geninfo_all_blocks=1 00:14:54.294 --rc geninfo_unexecuted_blocks=1 00:14:54.294 00:14:54.294 ' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:54.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.294 --rc genhtml_branch_coverage=1 00:14:54.294 --rc genhtml_function_coverage=1 00:14:54.294 --rc genhtml_legend=1 00:14:54.294 --rc geninfo_all_blocks=1 00:14:54.294 --rc geninfo_unexecuted_blocks=1 00:14:54.294 00:14:54.294 ' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:54.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.294 --rc genhtml_branch_coverage=1 00:14:54.294 --rc genhtml_function_coverage=1 00:14:54.294 --rc genhtml_legend=1 00:14:54.294 --rc geninfo_all_blocks=1 00:14:54.294 --rc geninfo_unexecuted_blocks=1 00:14:54.294 00:14:54.294 ' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:54.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.294 --rc genhtml_branch_coverage=1 00:14:54.294 --rc genhtml_function_coverage=1 00:14:54.294 --rc genhtml_legend=1 00:14:54.294 --rc geninfo_all_blocks=1 00:14:54.294 --rc geninfo_unexecuted_blocks=1 00:14:54.294 00:14:54.294 ' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:54.294 12:58:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.657 12:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:59.657 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:59.658 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:59.658 12:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:59.658 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.658 12:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:59.658 Found net devices under 0000:86:00.0: cvl_0_0 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:59.658 Found net devices under 0000:86:00.1: cvl_0_1 
00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:59.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:59.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:14:59.658 00:14:59.658 --- 10.0.0.2 ping statistics --- 00:14:59.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.658 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:14:59.658 00:14:59.658 --- 10.0.0.1 ping statistics --- 00:14:59.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.658 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.658 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:59.659 12:58:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1944898 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1944898 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1944898 ']' 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.659 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.659 [2024-11-29 12:58:59.414834] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:14:59.659 [2024-11-29 12:58:59.414879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.969 [2024-11-29 12:58:59.479875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.969 [2024-11-29 12:58:59.519527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.969 [2024-11-29 12:58:59.519560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.969 [2024-11-29 12:58:59.519571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.969 [2024-11-29 12:58:59.519579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.969 [2024-11-29 12:58:59.519586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:59.969 [2024-11-29 12:58:59.520209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.969 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.969 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:59.969 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:59.969 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:59.969 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.969 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.969 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:59.969 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.969 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.969 [2024-11-29 12:58:59.656107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.970 [2024-11-29 12:58:59.672323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.970 NULL1 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.970 12:58:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:59.970 [2024-11-29 12:58:59.727023] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:14:59.970 [2024-11-29 12:58:59.727053] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944920 ] 00:15:00.290 Attached to nqn.2016-06.io.spdk:cnode1 00:15:00.290 Namespace ID: 1 size: 1GB 00:15:00.290 fused_ordering(0) 00:15:00.290 fused_ordering(1) 00:15:00.290 fused_ordering(2) 00:15:00.290 fused_ordering(3) 00:15:00.290 fused_ordering(4) 00:15:00.290 fused_ordering(5) 00:15:00.290 fused_ordering(6) 00:15:00.290 fused_ordering(7) 00:15:00.290 fused_ordering(8) 00:15:00.290 fused_ordering(9) 00:15:00.290 fused_ordering(10) 00:15:00.290 fused_ordering(11) 00:15:00.290 fused_ordering(12) 00:15:00.290 fused_ordering(13) 00:15:00.290 fused_ordering(14) 00:15:00.290 fused_ordering(15) 00:15:00.290 fused_ordering(16) 00:15:00.290 fused_ordering(17) 00:15:00.290 fused_ordering(18) 00:15:00.290 fused_ordering(19) 00:15:00.290 fused_ordering(20) 00:15:00.290 fused_ordering(21) 00:15:00.290 fused_ordering(22) 00:15:00.290 fused_ordering(23) 00:15:00.290 fused_ordering(24) 00:15:00.290 fused_ordering(25) 00:15:00.290 fused_ordering(26) 00:15:00.290 fused_ordering(27) 00:15:00.290 
fused_ordering(28) 00:15:00.290 fused_ordering(29) 00:15:00.290 fused_ordering(30) 00:15:00.290 fused_ordering(31) 00:15:00.290 fused_ordering(32) 00:15:00.290 fused_ordering(33) 00:15:00.290 fused_ordering(34) 00:15:00.290 fused_ordering(35) 00:15:00.290 fused_ordering(36) 00:15:00.290 fused_ordering(37) 00:15:00.290 fused_ordering(38) 00:15:00.290 fused_ordering(39) 00:15:00.290 fused_ordering(40) 00:15:00.290 fused_ordering(41) 00:15:00.290 fused_ordering(42) 00:15:00.290 fused_ordering(43) 00:15:00.290 fused_ordering(44) 00:15:00.290 fused_ordering(45) 00:15:00.291 fused_ordering(46) 00:15:00.291 fused_ordering(47) 00:15:00.291 fused_ordering(48) 00:15:00.291 fused_ordering(49) 00:15:00.291 fused_ordering(50) 00:15:00.291 fused_ordering(51) 00:15:00.291 fused_ordering(52) 00:15:00.291 fused_ordering(53) 00:15:00.291 fused_ordering(54) 00:15:00.291 fused_ordering(55) 00:15:00.291 fused_ordering(56) 00:15:00.291 fused_ordering(57) 00:15:00.291 fused_ordering(58) 00:15:00.291 fused_ordering(59) 00:15:00.291 fused_ordering(60) 00:15:00.291 fused_ordering(61) 00:15:00.291 fused_ordering(62) 00:15:00.291 fused_ordering(63) 00:15:00.291 fused_ordering(64) 00:15:00.291 fused_ordering(65) 00:15:00.291 fused_ordering(66) 00:15:00.291 fused_ordering(67) 00:15:00.291 fused_ordering(68) 00:15:00.291 fused_ordering(69) 00:15:00.291 fused_ordering(70) 00:15:00.291 fused_ordering(71) 00:15:00.291 fused_ordering(72) 00:15:00.291 fused_ordering(73) 00:15:00.291 fused_ordering(74) 00:15:00.291 fused_ordering(75) 00:15:00.291 fused_ordering(76) 00:15:00.291 fused_ordering(77) 00:15:00.291 fused_ordering(78) 00:15:00.291 fused_ordering(79) 00:15:00.291 fused_ordering(80) 00:15:00.291 fused_ordering(81) 00:15:00.291 fused_ordering(82) 00:15:00.291 fused_ordering(83) 00:15:00.291 fused_ordering(84) 00:15:00.291 fused_ordering(85) 00:15:00.291 fused_ordering(86) 00:15:00.291 fused_ordering(87) 00:15:00.291 fused_ordering(88) 00:15:00.291 fused_ordering(89) 00:15:00.291 
fused_ordering(90) 00:15:00.291 fused_ordering(91) 00:15:00.291 fused_ordering(92) 00:15:00.291 fused_ordering(93) 00:15:00.291 fused_ordering(94) 00:15:00.291 fused_ordering(95) 00:15:00.291 fused_ordering(96) 00:15:00.291 fused_ordering(97) 00:15:00.291 fused_ordering(98) 00:15:00.291 fused_ordering(99) 00:15:00.291 fused_ordering(100) 00:15:00.291 fused_ordering(101) 00:15:00.291 fused_ordering(102) 00:15:00.291 fused_ordering(103) 00:15:00.291 fused_ordering(104) 00:15:00.291 fused_ordering(105) 00:15:00.291 fused_ordering(106) 00:15:00.291 fused_ordering(107) 00:15:00.291 fused_ordering(108) 00:15:00.291 fused_ordering(109) 00:15:00.291 fused_ordering(110) 00:15:00.291 fused_ordering(111) 00:15:00.291 fused_ordering(112) 00:15:00.291 fused_ordering(113) 00:15:00.291 fused_ordering(114) 00:15:00.291 fused_ordering(115) 00:15:00.291 fused_ordering(116) 00:15:00.291 fused_ordering(117) 00:15:00.291 fused_ordering(118) 00:15:00.291 fused_ordering(119) 00:15:00.291 fused_ordering(120) 00:15:00.291 fused_ordering(121) 00:15:00.291 fused_ordering(122) 00:15:00.291 fused_ordering(123) 00:15:00.291 fused_ordering(124) 00:15:00.291 fused_ordering(125) 00:15:00.291 fused_ordering(126) 00:15:00.291 fused_ordering(127) 00:15:00.291 fused_ordering(128) 00:15:00.291 fused_ordering(129) 00:15:00.291 fused_ordering(130) 00:15:00.291 fused_ordering(131) 00:15:00.291 fused_ordering(132) 00:15:00.291 fused_ordering(133) 00:15:00.291 fused_ordering(134) 00:15:00.291 fused_ordering(135) 00:15:00.291 fused_ordering(136) 00:15:00.291 fused_ordering(137) 00:15:00.291 fused_ordering(138) 00:15:00.291 fused_ordering(139) 00:15:00.291 fused_ordering(140) 00:15:00.291 fused_ordering(141) 00:15:00.291 fused_ordering(142) 00:15:00.291 fused_ordering(143) 00:15:00.291 fused_ordering(144) 00:15:00.291 fused_ordering(145) 00:15:00.291 fused_ordering(146) 00:15:00.291 fused_ordering(147) 00:15:00.291 fused_ordering(148) 00:15:00.291 fused_ordering(149) 00:15:00.291 fused_ordering(150) 
00:15:00.291 fused_ordering(151) 00:15:00.291 fused_ordering(152) 00:15:00.291 fused_ordering(153) 00:15:00.291 fused_ordering(154) 00:15:00.291 fused_ordering(155) 00:15:00.291 fused_ordering(156) 00:15:00.291 fused_ordering(157) 00:15:00.291 fused_ordering(158) 00:15:00.291 fused_ordering(159) 00:15:00.291 fused_ordering(160) 00:15:00.291 fused_ordering(161) 00:15:00.291 fused_ordering(162) 00:15:00.291 fused_ordering(163) 00:15:00.291 fused_ordering(164) 00:15:00.291 fused_ordering(165) 00:15:00.291 fused_ordering(166) 00:15:00.291 fused_ordering(167) 00:15:00.291 fused_ordering(168) 00:15:00.291 fused_ordering(169) 00:15:00.291 fused_ordering(170) 00:15:00.291 fused_ordering(171) 00:15:00.291 fused_ordering(172) 00:15:00.291 fused_ordering(173) 00:15:00.291 fused_ordering(174) 00:15:00.291 fused_ordering(175) 00:15:00.291 fused_ordering(176) 00:15:00.291 fused_ordering(177) 00:15:00.291 fused_ordering(178) 00:15:00.291 fused_ordering(179) 00:15:00.291 fused_ordering(180) 00:15:00.291 fused_ordering(181) 00:15:00.291 fused_ordering(182) 00:15:00.291 fused_ordering(183) 00:15:00.291 fused_ordering(184) 00:15:00.291 fused_ordering(185) 00:15:00.291 fused_ordering(186) 00:15:00.291 fused_ordering(187) 00:15:00.291 fused_ordering(188) 00:15:00.291 fused_ordering(189) 00:15:00.291 fused_ordering(190) 00:15:00.291 fused_ordering(191) 00:15:00.291 fused_ordering(192) 00:15:00.291 fused_ordering(193) 00:15:00.291 fused_ordering(194) 00:15:00.291 fused_ordering(195) 00:15:00.291 fused_ordering(196) 00:15:00.291 fused_ordering(197) 00:15:00.291 fused_ordering(198) 00:15:00.291 fused_ordering(199) 00:15:00.291 fused_ordering(200) 00:15:00.291 fused_ordering(201) 00:15:00.291 fused_ordering(202) 00:15:00.291 fused_ordering(203) 00:15:00.291 fused_ordering(204) 00:15:00.291 fused_ordering(205) 00:15:00.551 fused_ordering(206) 00:15:00.551 fused_ordering(207) 00:15:00.551 fused_ordering(208) 00:15:00.551 fused_ordering(209) 00:15:00.551 fused_ordering(210) 00:15:00.551 
fused_ordering(211) 00:15:00.551 fused_ordering(212) 00:15:00.551 fused_ordering(213) 00:15:00.551 fused_ordering(214) 00:15:00.551 fused_ordering(215) 00:15:00.551 fused_ordering(216) 00:15:00.551 fused_ordering(217) 00:15:00.551 fused_ordering(218) 00:15:00.551 fused_ordering(219) 00:15:00.551 fused_ordering(220) 00:15:00.551 fused_ordering(221) 00:15:00.551 fused_ordering(222) 00:15:00.551 fused_ordering(223) 00:15:00.551 fused_ordering(224) 00:15:00.551 fused_ordering(225) 00:15:00.551 fused_ordering(226) 00:15:00.551 fused_ordering(227) 00:15:00.551 fused_ordering(228) 00:15:00.551 fused_ordering(229) 00:15:00.551 fused_ordering(230) 00:15:00.551 fused_ordering(231) 00:15:00.551 fused_ordering(232) 00:15:00.551 fused_ordering(233) 00:15:00.551 fused_ordering(234) 00:15:00.551 fused_ordering(235) 00:15:00.551 fused_ordering(236) 00:15:00.551 fused_ordering(237) 00:15:00.551 fused_ordering(238) 00:15:00.551 fused_ordering(239) 00:15:00.551 fused_ordering(240) 00:15:00.551 fused_ordering(241) 00:15:00.551 fused_ordering(242) 00:15:00.551 fused_ordering(243) 00:15:00.551 fused_ordering(244) 00:15:00.551 fused_ordering(245) 00:15:00.551 fused_ordering(246) 00:15:00.551 fused_ordering(247) 00:15:00.551 fused_ordering(248) 00:15:00.551 fused_ordering(249) 00:15:00.551 fused_ordering(250) 00:15:00.551 fused_ordering(251) 00:15:00.551 fused_ordering(252) 00:15:00.551 fused_ordering(253) 00:15:00.551 fused_ordering(254) 00:15:00.551 fused_ordering(255) 00:15:00.551 fused_ordering(256) 00:15:00.551 fused_ordering(257) 00:15:00.551 fused_ordering(258) 00:15:00.551 fused_ordering(259) 00:15:00.551 fused_ordering(260) 00:15:00.551 fused_ordering(261) 00:15:00.551 fused_ordering(262) 00:15:00.551 fused_ordering(263) 00:15:00.551 fused_ordering(264) 00:15:00.551 fused_ordering(265) 00:15:00.551 fused_ordering(266) 00:15:00.551 fused_ordering(267) 00:15:00.551 fused_ordering(268) 00:15:00.551 fused_ordering(269) 00:15:00.551 fused_ordering(270) 00:15:00.551 fused_ordering(271) 
00:15:00.551 fused_ordering(272) 00:15:00.551 fused_ordering(273) 00:15:00.551 fused_ordering(274) 00:15:00.551 fused_ordering(275) 00:15:00.551 fused_ordering(276) 00:15:00.551 fused_ordering(277) 00:15:00.551 fused_ordering(278) 00:15:00.551 fused_ordering(279) 00:15:00.551 fused_ordering(280) 00:15:00.551 fused_ordering(281) 00:15:00.551 fused_ordering(282) 00:15:00.551 fused_ordering(283) 00:15:00.551 fused_ordering(284) 00:15:00.551 fused_ordering(285) 00:15:00.551 fused_ordering(286) 00:15:00.551 fused_ordering(287) 00:15:00.551 fused_ordering(288) 00:15:00.551 fused_ordering(289) 00:15:00.551 fused_ordering(290) 00:15:00.551 fused_ordering(291) 00:15:00.551 fused_ordering(292) 00:15:00.551 fused_ordering(293) 00:15:00.551 fused_ordering(294) 00:15:00.551 fused_ordering(295) 00:15:00.551 fused_ordering(296) 00:15:00.551 fused_ordering(297) 00:15:00.551 fused_ordering(298) 00:15:00.551 fused_ordering(299) 00:15:00.551 fused_ordering(300) 00:15:00.551 fused_ordering(301) 00:15:00.551 fused_ordering(302) 00:15:00.551 fused_ordering(303) 00:15:00.551 fused_ordering(304) 00:15:00.551 fused_ordering(305) 00:15:00.551 fused_ordering(306) 00:15:00.551 fused_ordering(307) 00:15:00.551 fused_ordering(308) 00:15:00.551 fused_ordering(309) 00:15:00.551 fused_ordering(310) 00:15:00.551 fused_ordering(311) 00:15:00.551 fused_ordering(312) 00:15:00.551 fused_ordering(313) 00:15:00.551 fused_ordering(314) 00:15:00.551 fused_ordering(315) 00:15:00.551 fused_ordering(316) 00:15:00.551 fused_ordering(317) 00:15:00.551 fused_ordering(318) 00:15:00.551 fused_ordering(319) 00:15:00.551 fused_ordering(320) 00:15:00.551 fused_ordering(321) 00:15:00.551 fused_ordering(322) 00:15:00.551 fused_ordering(323) 00:15:00.551 fused_ordering(324) 00:15:00.551 fused_ordering(325) 00:15:00.551 fused_ordering(326) 00:15:00.551 fused_ordering(327) 00:15:00.551 fused_ordering(328) 00:15:00.551 fused_ordering(329) 00:15:00.551 fused_ordering(330) 00:15:00.551 fused_ordering(331) 00:15:00.551 
fused_ordering(332) 00:15:00.551 fused_ordering(333) 00:15:00.551 fused_ordering(334) 00:15:00.551 fused_ordering(335) 00:15:00.551 fused_ordering(336) 00:15:00.551 fused_ordering(337) 00:15:00.551 fused_ordering(338) 00:15:00.551 fused_ordering(339) 00:15:00.551 fused_ordering(340) 00:15:00.551 fused_ordering(341) 00:15:00.551 fused_ordering(342) 00:15:00.551 fused_ordering(343) 00:15:00.551 fused_ordering(344) 00:15:00.551 fused_ordering(345) 00:15:00.551 fused_ordering(346) 00:15:00.551 fused_ordering(347) 00:15:00.551 fused_ordering(348) 00:15:00.551 fused_ordering(349) 00:15:00.551 fused_ordering(350) 00:15:00.551 fused_ordering(351) 00:15:00.551 fused_ordering(352) 00:15:00.551 fused_ordering(353) 00:15:00.551 fused_ordering(354) 00:15:00.551 fused_ordering(355) 00:15:00.551 fused_ordering(356) 00:15:00.551 fused_ordering(357) 00:15:00.551 fused_ordering(358) 00:15:00.551 fused_ordering(359) 00:15:00.551 fused_ordering(360) 00:15:00.551 fused_ordering(361) 00:15:00.551 fused_ordering(362) 00:15:00.551 fused_ordering(363) 00:15:00.551 fused_ordering(364) 00:15:00.551 fused_ordering(365) 00:15:00.551 fused_ordering(366) 00:15:00.551 fused_ordering(367) 00:15:00.551 fused_ordering(368) 00:15:00.551 fused_ordering(369) 00:15:00.551 fused_ordering(370) 00:15:00.551 fused_ordering(371) 00:15:00.551 fused_ordering(372) 00:15:00.551 fused_ordering(373) 00:15:00.551 fused_ordering(374) 00:15:00.551 fused_ordering(375) 00:15:00.551 fused_ordering(376) 00:15:00.551 fused_ordering(377) 00:15:00.551 fused_ordering(378) 00:15:00.551 fused_ordering(379) 00:15:00.551 fused_ordering(380) 00:15:00.551 fused_ordering(381) 00:15:00.551 fused_ordering(382) 00:15:00.551 fused_ordering(383) 00:15:00.551 fused_ordering(384) 00:15:00.551 fused_ordering(385) 00:15:00.551 fused_ordering(386) 00:15:00.551 fused_ordering(387) 00:15:00.551 fused_ordering(388) 00:15:00.551 fused_ordering(389) 00:15:00.551 fused_ordering(390) 00:15:00.551 fused_ordering(391) 00:15:00.551 fused_ordering(392) 
00:15:00.551 fused_ordering(393) 00:15:00.551 fused_ordering(394) 00:15:00.551 fused_ordering(395) 00:15:00.551 fused_ordering(396) 00:15:00.551 fused_ordering(397) 00:15:00.551 fused_ordering(398) 00:15:00.551 fused_ordering(399) 00:15:00.551 fused_ordering(400) 00:15:00.551 fused_ordering(401) 00:15:00.551 fused_ordering(402) 00:15:00.551 fused_ordering(403) 00:15:00.551 fused_ordering(404) 00:15:00.551 fused_ordering(405) 00:15:00.551 fused_ordering(406) 00:15:00.551 fused_ordering(407) 00:15:00.551 fused_ordering(408) 00:15:00.551 fused_ordering(409) 00:15:00.551 fused_ordering(410) 00:15:00.810 fused_ordering(411) 00:15:00.810 fused_ordering(412) 00:15:00.810 fused_ordering(413) 00:15:00.810 fused_ordering(414) 00:15:00.810 fused_ordering(415) 00:15:00.810 fused_ordering(416) 00:15:00.810 fused_ordering(417) 00:15:00.810 fused_ordering(418) 00:15:00.810 fused_ordering(419) 00:15:00.810 fused_ordering(420) 00:15:00.810 fused_ordering(421) 00:15:00.810 fused_ordering(422) 00:15:00.810 fused_ordering(423) 00:15:00.810 fused_ordering(424) 00:15:00.811 fused_ordering(425) 00:15:00.811 fused_ordering(426) 00:15:00.811 fused_ordering(427) 00:15:00.811 fused_ordering(428) 00:15:00.811 fused_ordering(429) 00:15:00.811 fused_ordering(430) 00:15:00.811 fused_ordering(431) 00:15:00.811 fused_ordering(432) 00:15:00.811 fused_ordering(433) 00:15:00.811 fused_ordering(434) 00:15:00.811 fused_ordering(435) 00:15:00.811 fused_ordering(436) 00:15:00.811 fused_ordering(437) 00:15:00.811 fused_ordering(438) 00:15:00.811 fused_ordering(439) 00:15:00.811 fused_ordering(440) 00:15:00.811 fused_ordering(441) 00:15:00.811 fused_ordering(442) 00:15:00.811 fused_ordering(443) 00:15:00.811 fused_ordering(444) 00:15:00.811 fused_ordering(445) 00:15:00.811 fused_ordering(446) 00:15:00.811 fused_ordering(447) 00:15:00.811 fused_ordering(448) 00:15:00.811 fused_ordering(449) 00:15:00.811 fused_ordering(450) 00:15:00.811 fused_ordering(451) 00:15:00.811 fused_ordering(452) 00:15:00.811 
fused_ordering(453) 00:15:00.811 fused_ordering(454) 00:15:00.811 fused_ordering(455) 00:15:00.811 fused_ordering(456) 00:15:00.811 fused_ordering(457) 00:15:00.811 fused_ordering(458) 00:15:00.811 fused_ordering(459) 00:15:00.811 fused_ordering(460) 00:15:00.811 fused_ordering(461) 00:15:00.811 fused_ordering(462) 00:15:00.811 fused_ordering(463) 00:15:00.811 fused_ordering(464) 00:15:00.811 fused_ordering(465) 00:15:00.811 fused_ordering(466) 00:15:00.811 fused_ordering(467) 00:15:00.811 fused_ordering(468) 00:15:00.811 fused_ordering(469) 00:15:00.811 fused_ordering(470) 00:15:00.811 fused_ordering(471) 00:15:00.811 fused_ordering(472) 00:15:00.811 fused_ordering(473) 00:15:00.811 fused_ordering(474) 00:15:00.811 fused_ordering(475) 00:15:00.811 fused_ordering(476) 00:15:00.811 fused_ordering(477) 00:15:00.811 fused_ordering(478) 00:15:00.811 fused_ordering(479) 00:15:00.811 fused_ordering(480) 00:15:00.811 fused_ordering(481) 00:15:00.811 fused_ordering(482) 00:15:00.811 fused_ordering(483) 00:15:00.811 fused_ordering(484) 00:15:00.811 fused_ordering(485) 00:15:00.811 fused_ordering(486) 00:15:00.811 fused_ordering(487) 00:15:00.811 fused_ordering(488) 00:15:00.811 fused_ordering(489) 00:15:00.811 fused_ordering(490) 00:15:00.811 fused_ordering(491) 00:15:00.811 fused_ordering(492) 00:15:00.811 fused_ordering(493) 00:15:00.811 fused_ordering(494) 00:15:00.811 fused_ordering(495) 00:15:00.811 fused_ordering(496) 00:15:00.811 fused_ordering(497) 00:15:00.811 fused_ordering(498) 00:15:00.811 fused_ordering(499) 00:15:00.811 fused_ordering(500) 00:15:00.811 fused_ordering(501) 00:15:00.811 fused_ordering(502) 00:15:00.811 fused_ordering(503) 00:15:00.811 fused_ordering(504) 00:15:00.811 fused_ordering(505) 00:15:00.811 fused_ordering(506) 00:15:00.811 fused_ordering(507) 00:15:00.811 fused_ordering(508) 00:15:00.811 fused_ordering(509) 00:15:00.811 fused_ordering(510) 00:15:00.811 fused_ordering(511) 00:15:00.811 fused_ordering(512) 00:15:00.811 fused_ordering(513) 
00:15:00.811 fused_ordering(514) 00:15:00.811 fused_ordering(515) 00:15:00.811 fused_ordering(516) 00:15:00.811 fused_ordering(517) 00:15:00.811 fused_ordering(518) 00:15:00.811 fused_ordering(519) 00:15:00.811 fused_ordering(520) 00:15:00.811 fused_ordering(521) 00:15:00.811 fused_ordering(522) 00:15:00.811 fused_ordering(523) 00:15:00.811 fused_ordering(524) 00:15:00.811 fused_ordering(525) 00:15:00.811 fused_ordering(526) 00:15:00.811 fused_ordering(527) 00:15:00.811 fused_ordering(528) 00:15:00.811 fused_ordering(529) 00:15:00.811 fused_ordering(530) 00:15:00.811 fused_ordering(531) 00:15:00.811 fused_ordering(532) 00:15:00.811 fused_ordering(533) 00:15:00.811 fused_ordering(534) 00:15:00.811 fused_ordering(535) 00:15:00.811 fused_ordering(536) 00:15:00.811 fused_ordering(537) 00:15:00.811 fused_ordering(538) 00:15:00.811 fused_ordering(539) 00:15:00.811 fused_ordering(540) 00:15:00.811 fused_ordering(541) 00:15:00.811 fused_ordering(542) 00:15:00.811 fused_ordering(543) 00:15:00.811 fused_ordering(544) 00:15:00.811 fused_ordering(545) 00:15:00.811 fused_ordering(546) 00:15:00.811 fused_ordering(547) 00:15:00.811 fused_ordering(548) 00:15:00.811 fused_ordering(549) 00:15:00.811 fused_ordering(550) 00:15:00.811 fused_ordering(551) 00:15:00.811 fused_ordering(552) 00:15:00.811 fused_ordering(553) 00:15:00.811 fused_ordering(554) 00:15:00.811 fused_ordering(555) 00:15:00.811 fused_ordering(556) 00:15:00.811 fused_ordering(557) 00:15:00.811 fused_ordering(558) 00:15:00.811 fused_ordering(559) 00:15:00.811 fused_ordering(560) 00:15:00.811 fused_ordering(561) 00:15:00.811 fused_ordering(562) 00:15:00.811 fused_ordering(563) 00:15:00.811 fused_ordering(564) 00:15:00.811 fused_ordering(565) 00:15:00.811 fused_ordering(566) 00:15:00.811 fused_ordering(567) 00:15:00.811 fused_ordering(568) 00:15:00.811 fused_ordering(569) 00:15:00.811 fused_ordering(570) 00:15:00.811 fused_ordering(571) 00:15:00.811 fused_ordering(572) 00:15:00.811 fused_ordering(573) 00:15:00.811 
fused_ordering(574) 00:15:00.811 fused_ordering(575) 00:15:00.811 fused_ordering(576) 00:15:00.811 fused_ordering(577) 00:15:00.811 fused_ordering(578) 00:15:00.811 fused_ordering(579) 00:15:00.811 fused_ordering(580) 00:15:00.811 fused_ordering(581) 00:15:00.811 fused_ordering(582) 00:15:00.811 fused_ordering(583) 00:15:00.811 fused_ordering(584) 00:15:00.811 fused_ordering(585) 00:15:00.811 fused_ordering(586) 00:15:00.811 fused_ordering(587) 00:15:00.811 fused_ordering(588) 00:15:00.811 fused_ordering(589) 00:15:00.811 fused_ordering(590) 00:15:00.811 fused_ordering(591) 00:15:00.811 fused_ordering(592) 00:15:00.811 fused_ordering(593) 00:15:00.811 fused_ordering(594) 00:15:00.811 fused_ordering(595) 00:15:00.811 fused_ordering(596) 00:15:00.811 fused_ordering(597) 00:15:00.811 fused_ordering(598) 00:15:00.811 fused_ordering(599) 00:15:00.811 fused_ordering(600) 00:15:00.811 fused_ordering(601) 00:15:00.811 fused_ordering(602) 00:15:00.811 fused_ordering(603) 00:15:00.811 fused_ordering(604) 00:15:00.811 fused_ordering(605) 00:15:00.811 fused_ordering(606) 00:15:00.811 fused_ordering(607) 00:15:00.811 fused_ordering(608) 00:15:00.811 fused_ordering(609) 00:15:00.811 fused_ordering(610) 00:15:00.811 fused_ordering(611) 00:15:00.811 fused_ordering(612) 00:15:00.811 fused_ordering(613) 00:15:00.811 fused_ordering(614) 00:15:00.811 fused_ordering(615) 00:15:01.379 fused_ordering(616) 00:15:01.379 fused_ordering(617) 00:15:01.379 fused_ordering(618) 00:15:01.379 fused_ordering(619) 00:15:01.379 fused_ordering(620) 00:15:01.379 fused_ordering(621) 00:15:01.379 fused_ordering(622) 00:15:01.379 fused_ordering(623) 00:15:01.379 fused_ordering(624) 00:15:01.379 fused_ordering(625) 00:15:01.379 fused_ordering(626) 00:15:01.379 fused_ordering(627) 00:15:01.379 fused_ordering(628) 00:15:01.379 fused_ordering(629) 00:15:01.379 fused_ordering(630) 00:15:01.379 fused_ordering(631) 00:15:01.379 fused_ordering(632) 00:15:01.379 fused_ordering(633) 00:15:01.379 fused_ordering(634) 
00:15:01.379 fused_ordering(635) 00:15:01.379 fused_ordering(636) 00:15:01.379 fused_ordering(637) 00:15:01.379 fused_ordering(638) 00:15:01.379 fused_ordering(639) 00:15:01.379 fused_ordering(640) 00:15:01.379 fused_ordering(641) 00:15:01.379 fused_ordering(642) 00:15:01.379 fused_ordering(643) 00:15:01.379 fused_ordering(644) 00:15:01.379 fused_ordering(645) 00:15:01.379 fused_ordering(646) 00:15:01.379 fused_ordering(647) 00:15:01.379 fused_ordering(648) 00:15:01.379 fused_ordering(649) 00:15:01.379 fused_ordering(650) 00:15:01.379 fused_ordering(651) 00:15:01.379 fused_ordering(652) 00:15:01.379 fused_ordering(653) 00:15:01.379 fused_ordering(654) 00:15:01.379 fused_ordering(655) 00:15:01.379 fused_ordering(656) 00:15:01.379 fused_ordering(657) 00:15:01.379 fused_ordering(658) 00:15:01.379 fused_ordering(659) 00:15:01.379 fused_ordering(660) 00:15:01.379 fused_ordering(661) 00:15:01.379 fused_ordering(662) 00:15:01.379 fused_ordering(663) 00:15:01.380 fused_ordering(664) 00:15:01.380 fused_ordering(665) 00:15:01.380 fused_ordering(666) 00:15:01.380 fused_ordering(667) 00:15:01.380 fused_ordering(668) 00:15:01.380 fused_ordering(669) 00:15:01.380 fused_ordering(670) 00:15:01.380 fused_ordering(671) 00:15:01.380 fused_ordering(672) 00:15:01.380 fused_ordering(673) 00:15:01.380 fused_ordering(674) 00:15:01.380 fused_ordering(675) 00:15:01.380 fused_ordering(676) 00:15:01.380 fused_ordering(677) 00:15:01.380 fused_ordering(678) 00:15:01.380 fused_ordering(679) 00:15:01.380 fused_ordering(680) 00:15:01.380 fused_ordering(681) 00:15:01.380 fused_ordering(682) 00:15:01.380 fused_ordering(683) 00:15:01.380 fused_ordering(684) 00:15:01.380 fused_ordering(685) 00:15:01.380 fused_ordering(686) 00:15:01.380 fused_ordering(687) 00:15:01.380 fused_ordering(688) 00:15:01.380 fused_ordering(689) 00:15:01.380 fused_ordering(690) 00:15:01.380 fused_ordering(691) 00:15:01.380 fused_ordering(692) 00:15:01.380 fused_ordering(693) 00:15:01.380 fused_ordering(694) 00:15:01.380 
fused_ordering(695) 00:15:01.380 fused_ordering(696) 00:15:01.380 fused_ordering(697) 00:15:01.380 fused_ordering(698) 00:15:01.380 fused_ordering(699) 00:15:01.380 fused_ordering(700) 00:15:01.380 fused_ordering(701) 00:15:01.380 fused_ordering(702) 00:15:01.380 fused_ordering(703) 00:15:01.380 fused_ordering(704) 00:15:01.380 fused_ordering(705) 00:15:01.380 fused_ordering(706) 00:15:01.380 fused_ordering(707) 00:15:01.380 fused_ordering(708) 00:15:01.380 fused_ordering(709) 00:15:01.380 fused_ordering(710) 00:15:01.380 fused_ordering(711) 00:15:01.380 fused_ordering(712) 00:15:01.380 fused_ordering(713) 00:15:01.380 fused_ordering(714) 00:15:01.380 fused_ordering(715) 00:15:01.380 fused_ordering(716) 00:15:01.380 fused_ordering(717) 00:15:01.380 fused_ordering(718) 00:15:01.380 fused_ordering(719) 00:15:01.380 fused_ordering(720) 00:15:01.380 fused_ordering(721) 00:15:01.380 fused_ordering(722) 00:15:01.380 fused_ordering(723) 00:15:01.380 fused_ordering(724) 00:15:01.380 fused_ordering(725) 00:15:01.380 fused_ordering(726) 00:15:01.380 fused_ordering(727) 00:15:01.380 fused_ordering(728) 00:15:01.380 fused_ordering(729) 00:15:01.380 fused_ordering(730) 00:15:01.380 fused_ordering(731) 00:15:01.380 fused_ordering(732) 00:15:01.380 fused_ordering(733) 00:15:01.380 fused_ordering(734) 00:15:01.380 fused_ordering(735) 00:15:01.380 fused_ordering(736) 00:15:01.380 fused_ordering(737) 00:15:01.380 fused_ordering(738) 00:15:01.380 fused_ordering(739) 00:15:01.380 fused_ordering(740) 00:15:01.380 fused_ordering(741) 00:15:01.380 fused_ordering(742) 00:15:01.380 fused_ordering(743) 00:15:01.380 fused_ordering(744) 00:15:01.380 fused_ordering(745) 00:15:01.380 fused_ordering(746) 00:15:01.380 fused_ordering(747) 00:15:01.380 fused_ordering(748) 00:15:01.380 fused_ordering(749) 00:15:01.380 fused_ordering(750) 00:15:01.380 fused_ordering(751) 00:15:01.380 fused_ordering(752) 00:15:01.380 fused_ordering(753) 00:15:01.380 fused_ordering(754) 00:15:01.380 fused_ordering(755) 
00:15:01.380 fused_ordering(756) 00:15:01.380 fused_ordering(757) 00:15:01.380 fused_ordering(758) 00:15:01.380 fused_ordering(759) 00:15:01.380 fused_ordering(760) 00:15:01.380 fused_ordering(761) 00:15:01.380 fused_ordering(762) 00:15:01.380 fused_ordering(763) 00:15:01.380 fused_ordering(764) 00:15:01.380 fused_ordering(765) 00:15:01.380 fused_ordering(766) 00:15:01.380 fused_ordering(767) 00:15:01.380 fused_ordering(768) 00:15:01.380 fused_ordering(769) 00:15:01.380 fused_ordering(770) 00:15:01.380 fused_ordering(771) 00:15:01.380 fused_ordering(772) 00:15:01.380 fused_ordering(773) 00:15:01.380 fused_ordering(774) 00:15:01.380 fused_ordering(775) 00:15:01.380 fused_ordering(776) 00:15:01.380 fused_ordering(777) 00:15:01.380 fused_ordering(778) 00:15:01.380 fused_ordering(779) 00:15:01.380 fused_ordering(780) 00:15:01.380 fused_ordering(781) 00:15:01.380 fused_ordering(782) 00:15:01.380 fused_ordering(783) 00:15:01.380 fused_ordering(784) 00:15:01.380 fused_ordering(785) 00:15:01.380 fused_ordering(786) 00:15:01.380 fused_ordering(787) 00:15:01.380 fused_ordering(788) 00:15:01.380 fused_ordering(789) 00:15:01.380 fused_ordering(790) 00:15:01.380 fused_ordering(791) 00:15:01.380 fused_ordering(792) 00:15:01.380 fused_ordering(793) 00:15:01.380 fused_ordering(794) 00:15:01.380 fused_ordering(795) 00:15:01.380 fused_ordering(796) 00:15:01.380 fused_ordering(797) 00:15:01.380 fused_ordering(798) 00:15:01.380 fused_ordering(799) 00:15:01.380 fused_ordering(800) 00:15:01.380 fused_ordering(801) 00:15:01.380 fused_ordering(802) 00:15:01.380 fused_ordering(803) 00:15:01.380 fused_ordering(804) 00:15:01.380 fused_ordering(805) 00:15:01.380 fused_ordering(806) 00:15:01.380 fused_ordering(807) 00:15:01.380 fused_ordering(808) 00:15:01.380 fused_ordering(809) 00:15:01.380 fused_ordering(810) 00:15:01.380 fused_ordering(811) 00:15:01.380 fused_ordering(812) 00:15:01.380 fused_ordering(813) 00:15:01.380 fused_ordering(814) 00:15:01.380 fused_ordering(815) 00:15:01.380 
fused_ordering(816) 00:15:01.380 fused_ordering(817) 00:15:01.380 fused_ordering(818) 00:15:01.380 fused_ordering(819) 00:15:01.380 fused_ordering(820) 00:15:01.948 fused_ordering(821) 00:15:01.948 fused_ordering(822) 00:15:01.948 fused_ordering(823) 00:15:01.948 fused_ordering(824) 00:15:01.948 fused_ordering(825) 00:15:01.948 fused_ordering(826) 00:15:01.948 fused_ordering(827) 00:15:01.948 fused_ordering(828) 00:15:01.948 fused_ordering(829) 00:15:01.948 fused_ordering(830) 00:15:01.948 fused_ordering(831) 00:15:01.948 fused_ordering(832) 00:15:01.948 fused_ordering(833) 00:15:01.948 fused_ordering(834) 00:15:01.948 fused_ordering(835) 00:15:01.948 fused_ordering(836) 00:15:01.948 fused_ordering(837) 00:15:01.948 fused_ordering(838) 00:15:01.948 fused_ordering(839) 00:15:01.948 fused_ordering(840) 00:15:01.948 fused_ordering(841) 00:15:01.948 fused_ordering(842) 00:15:01.948 fused_ordering(843) 00:15:01.948 fused_ordering(844) 00:15:01.948 fused_ordering(845) 00:15:01.948 fused_ordering(846) 00:15:01.948 fused_ordering(847) 00:15:01.948 fused_ordering(848) 00:15:01.948 fused_ordering(849) 00:15:01.948 fused_ordering(850) 00:15:01.948 fused_ordering(851) 00:15:01.948 fused_ordering(852) 00:15:01.948 fused_ordering(853) 00:15:01.948 fused_ordering(854) 00:15:01.948 fused_ordering(855) 00:15:01.948 fused_ordering(856) 00:15:01.948 fused_ordering(857) 00:15:01.948 fused_ordering(858) 00:15:01.948 fused_ordering(859) 00:15:01.948 fused_ordering(860) 00:15:01.948 fused_ordering(861) 00:15:01.948 fused_ordering(862) 00:15:01.948 fused_ordering(863) 00:15:01.948 fused_ordering(864) 00:15:01.948 fused_ordering(865) 00:15:01.948 fused_ordering(866) 00:15:01.948 fused_ordering(867) 00:15:01.948 fused_ordering(868) 00:15:01.948 fused_ordering(869) 00:15:01.948 fused_ordering(870) 00:15:01.948 fused_ordering(871) 00:15:01.948 fused_ordering(872) 00:15:01.948 fused_ordering(873) 00:15:01.948 fused_ordering(874) 00:15:01.948 fused_ordering(875) 00:15:01.948 fused_ordering(876) 
00:15:01.948 fused_ordering(877) 00:15:01.948 fused_ordering(878) 00:15:01.948 fused_ordering(879) 00:15:01.948 fused_ordering(880) 00:15:01.948 fused_ordering(881) 00:15:01.948 fused_ordering(882) 00:15:01.948 fused_ordering(883) 00:15:01.948 fused_ordering(884) 00:15:01.948 fused_ordering(885) 00:15:01.948 fused_ordering(886) 00:15:01.948 fused_ordering(887) 00:15:01.948 fused_ordering(888) 00:15:01.948 fused_ordering(889) 00:15:01.948 fused_ordering(890) 00:15:01.948 fused_ordering(891) 00:15:01.948 fused_ordering(892) 00:15:01.948 fused_ordering(893) 00:15:01.948 fused_ordering(894) 00:15:01.948 fused_ordering(895) 00:15:01.948 fused_ordering(896) 00:15:01.948 fused_ordering(897) 00:15:01.948 fused_ordering(898) 00:15:01.948 fused_ordering(899) 00:15:01.948 fused_ordering(900) 00:15:01.948 fused_ordering(901) 00:15:01.948 fused_ordering(902) 00:15:01.948 fused_ordering(903) 00:15:01.948 fused_ordering(904) 00:15:01.948 fused_ordering(905) 00:15:01.948 fused_ordering(906) 00:15:01.948 fused_ordering(907) 00:15:01.948 fused_ordering(908) 00:15:01.948 fused_ordering(909) 00:15:01.948 fused_ordering(910) 00:15:01.948 fused_ordering(911) 00:15:01.948 fused_ordering(912) 00:15:01.948 fused_ordering(913) 00:15:01.948 fused_ordering(914) 00:15:01.948 fused_ordering(915) 00:15:01.948 fused_ordering(916) 00:15:01.948 fused_ordering(917) 00:15:01.948 fused_ordering(918) 00:15:01.948 fused_ordering(919) 00:15:01.948 fused_ordering(920) 00:15:01.949 fused_ordering(921) 00:15:01.949 fused_ordering(922) 00:15:01.949 fused_ordering(923) 00:15:01.949 fused_ordering(924) 00:15:01.949 fused_ordering(925) 00:15:01.949 fused_ordering(926) 00:15:01.949 fused_ordering(927) 00:15:01.949 fused_ordering(928) 00:15:01.949 fused_ordering(929) 00:15:01.949 fused_ordering(930) 00:15:01.949 fused_ordering(931) 00:15:01.949 fused_ordering(932) 00:15:01.949 fused_ordering(933) 00:15:01.949 fused_ordering(934) 00:15:01.949 fused_ordering(935) 00:15:01.949 fused_ordering(936) 00:15:01.949 
fused_ordering(937) 00:15:01.949 fused_ordering(938) 00:15:01.949 fused_ordering(939) 00:15:01.949 fused_ordering(940) 00:15:01.949 fused_ordering(941) 00:15:01.949 fused_ordering(942) 00:15:01.949 fused_ordering(943) 00:15:01.949 fused_ordering(944) 00:15:01.949 fused_ordering(945) 00:15:01.949 fused_ordering(946) 00:15:01.949 fused_ordering(947) 00:15:01.949 fused_ordering(948) 00:15:01.949 fused_ordering(949) 00:15:01.949 fused_ordering(950) 00:15:01.949 fused_ordering(951) 00:15:01.949 fused_ordering(952) 00:15:01.949 fused_ordering(953) 00:15:01.949 fused_ordering(954) 00:15:01.949 fused_ordering(955) 00:15:01.949 fused_ordering(956) 00:15:01.949 fused_ordering(957) 00:15:01.949 fused_ordering(958) 00:15:01.949 fused_ordering(959) 00:15:01.949 fused_ordering(960) 00:15:01.949 fused_ordering(961) 00:15:01.949 fused_ordering(962) 00:15:01.949 fused_ordering(963) 00:15:01.949 fused_ordering(964) 00:15:01.949 fused_ordering(965) 00:15:01.949 fused_ordering(966) 00:15:01.949 fused_ordering(967) 00:15:01.949 fused_ordering(968) 00:15:01.949 fused_ordering(969) 00:15:01.949 fused_ordering(970) 00:15:01.949 fused_ordering(971) 00:15:01.949 fused_ordering(972) 00:15:01.949 fused_ordering(973) 00:15:01.949 fused_ordering(974) 00:15:01.949 fused_ordering(975) 00:15:01.949 fused_ordering(976) 00:15:01.949 fused_ordering(977) 00:15:01.949 fused_ordering(978) 00:15:01.949 fused_ordering(979) 00:15:01.949 fused_ordering(980) 00:15:01.949 fused_ordering(981) 00:15:01.949 fused_ordering(982) 00:15:01.949 fused_ordering(983) 00:15:01.949 fused_ordering(984) 00:15:01.949 fused_ordering(985) 00:15:01.949 fused_ordering(986) 00:15:01.949 fused_ordering(987) 00:15:01.949 fused_ordering(988) 00:15:01.949 fused_ordering(989) 00:15:01.949 fused_ordering(990) 00:15:01.949 fused_ordering(991) 00:15:01.949 fused_ordering(992) 00:15:01.949 fused_ordering(993) 00:15:01.949 fused_ordering(994) 00:15:01.949 fused_ordering(995) 00:15:01.949 fused_ordering(996) 00:15:01.949 fused_ordering(997) 
00:15:01.949 fused_ordering(998) 00:15:01.949 fused_ordering(999) 00:15:01.949 fused_ordering(1000) 00:15:01.949 fused_ordering(1001) 00:15:01.949 fused_ordering(1002) 00:15:01.949 fused_ordering(1003) 00:15:01.949 fused_ordering(1004) 00:15:01.949 fused_ordering(1005) 00:15:01.949 fused_ordering(1006) 00:15:01.949 fused_ordering(1007) 00:15:01.949 fused_ordering(1008) 00:15:01.949 fused_ordering(1009) 00:15:01.949 fused_ordering(1010) 00:15:01.949 fused_ordering(1011) 00:15:01.949 fused_ordering(1012) 00:15:01.949 fused_ordering(1013) 00:15:01.949 fused_ordering(1014) 00:15:01.949 fused_ordering(1015) 00:15:01.949 fused_ordering(1016) 00:15:01.949 fused_ordering(1017) 00:15:01.949 fused_ordering(1018) 00:15:01.949 fused_ordering(1019) 00:15:01.949 fused_ordering(1020) 00:15:01.949 fused_ordering(1021) 00:15:01.949 fused_ordering(1022) 00:15:01.949 fused_ordering(1023) 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:01.949 rmmod nvme_tcp 00:15:01.949 rmmod nvme_fabrics 00:15:01.949 rmmod nvme_keyring 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1944898 ']' 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1944898 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1944898 ']' 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1944898 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1944898 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1944898' 00:15:01.949 killing process with pid 1944898 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1944898 00:15:01.949 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1944898 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.208 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.113 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:04.113 00:15:04.113 real 0m9.959s 00:15:04.113 user 0m4.668s 00:15:04.113 sys 0m5.343s 00:15:04.113 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.113 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.113 ************************************ 00:15:04.113 END TEST nvmf_fused_ordering 00:15:04.113 ************************************ 00:15:04.113 12:59:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:04.113 12:59:03 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:04.113 12:59:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.113 12:59:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.113 ************************************ 00:15:04.113 START TEST nvmf_ns_masking 00:15:04.113 ************************************ 00:15:04.113 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:04.374 * Looking for test storage... 00:15:04.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.374 12:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:04.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.374 --rc genhtml_branch_coverage=1 00:15:04.374 --rc genhtml_function_coverage=1 00:15:04.374 --rc genhtml_legend=1 00:15:04.374 --rc geninfo_all_blocks=1 00:15:04.374 --rc geninfo_unexecuted_blocks=1 00:15:04.374 00:15:04.374 ' 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:04.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.374 --rc genhtml_branch_coverage=1 00:15:04.374 --rc genhtml_function_coverage=1 00:15:04.374 --rc genhtml_legend=1 00:15:04.374 --rc geninfo_all_blocks=1 00:15:04.374 --rc geninfo_unexecuted_blocks=1 00:15:04.374 00:15:04.374 ' 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:04.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.374 --rc genhtml_branch_coverage=1 00:15:04.374 --rc genhtml_function_coverage=1 00:15:04.374 --rc genhtml_legend=1 00:15:04.374 --rc geninfo_all_blocks=1 00:15:04.374 --rc geninfo_unexecuted_blocks=1 00:15:04.374 00:15:04.374 ' 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:04.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.374 --rc genhtml_branch_coverage=1 00:15:04.374 --rc 
genhtml_function_coverage=1 00:15:04.374 --rc genhtml_legend=1 00:15:04.374 --rc geninfo_all_blocks=1 00:15:04.374 --rc geninfo_unexecuted_blocks=1 00:15:04.374 00:15:04.374 ' 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.374 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:04.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=30431f1d-a711-49c2-adb2-4b77222003fa 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a29efad1-2dbe-40ff-8df9-8635fda48d2c 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b78a5b24-f05a-4737-8e7e-f8733d2054e8 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:04.375 12:59:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:10.947 12:59:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=()
00:15:10.947 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=()
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=()
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:15:10.948 Found 0000:86:00.0 (0x8086 - 0x159b)
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:15:10.948 Found 0000:86:00.1 (0x8086 - 0x159b)
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:15:10.948 Found net devices under 0000:86:00.0: cvl_0_0
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:15:10.948 Found net devices under 0000:86:00.1: cvl_0_1
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:10.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:10.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms
00:15:10.948 
00:15:10.948 --- 10.0.0.2 ping statistics ---
00:15:10.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:10.948 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:10.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:10.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms
00:15:10.948 
00:15:10.948 --- 10.0.0.1 ping statistics ---
00:15:10.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:10.948 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1948686
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1948686
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1948686 ']'
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:10.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:10.948 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:15:10.948 [2024-11-29 12:59:09.824602] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization...
00:15:10.948 [2024-11-29 12:59:09.824649] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:10.948 [2024-11-29 12:59:09.890853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:10.948 [2024-11-29 12:59:09.932093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:10.948 [2024-11-29 12:59:09.932130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:10.948 [2024-11-29 12:59:09.932138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:10.948 [2024-11-29 12:59:09.932145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:10.948 [2024-11-29 12:59:09.932151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:10.948 [2024-11-29 12:59:09.932714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:15:10.948 [2024-11-29 12:59:10.238617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:15:10.948 Malloc1
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:15:10.948 Malloc2
00:15:10.948 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:15:11.206 12:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:15:11.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:11.464 [2024-11-29 12:59:11.211390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:11.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:15:11.464 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b78a5b24-f05a-4737-8e7e-f8733d2054e8 -a 10.0.0.2 -s 4420 -i 4
00:15:11.723 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:15:11.723 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:15:11.723 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:15:11.723 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:15:11.723 12:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:13.625 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:15:13.884 [ 0]:0x1
00:15:13.884 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:15:13.884 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:13.884 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2973ada5a7a24c2c9c82b2abc79024de
00:15:13.884 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2973ada5a7a24c2c9c82b2abc79024de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:13.884 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:14.143 [ 0]:0x1
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2973ada5a7a24c2c9c82b2abc79024de
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2973ada5a7a24c2c9c82b2abc79024de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:15:14.143 [ 1]:0x2
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=708bef7c809c4bd2bb3d6fdb11c36b1e
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 708bef7c809c4bd2bb3d6fdb11c36b1e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:15:14.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:14.143 12:59:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:14.402 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:15:14.660 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:15:14.660 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b78a5b24-f05a-4737-8e7e-f8733d2054e8 -a 10.0.0.2 -s 4420 -i 4
00:15:14.919 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:15:14.919 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:15:14.919 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:15:14.919 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:15:14.919 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:15:14.919 12:59:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:16.823 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:15:17.082 [ 0]:0x2
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=708bef7c809c4bd2bb3d6fdb11c36b1e
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 708bef7c809c4bd2bb3d6fdb11c36b1e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:17.082 12:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:15:17.341 [ 0]:0x1
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2973ada5a7a24c2c9c82b2abc79024de
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2973ada5a7a24c2c9c82b2abc79024de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:17.341 [ 1]:0x2
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=708bef7c809c4bd2bb3d6fdb11c36b1e
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 708bef7c809c4bd2bb3d6fdb11c36b1e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:17.341 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:15:17.617 [ 0]:0x2
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:15:17.617 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:17.618 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=708bef7c809c4bd2bb3d6fdb11c36b1e
00:15:17.618 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 708bef7c809c4bd2bb3d6fdb11c36b1e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:17.618 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:15:17.618 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:15:17.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:17.875 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:15:17.875 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:15:17.875 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b78a5b24-f05a-4737-8e7e-f8733d2054e8 -a 10.0.0.2 -s 4420 -i 4
00:15:18.133 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:15:18.133 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:15:18.133 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:15:18.133 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:15:18.133 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:15:18.133 12:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:15:20.037 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:15:20.037 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:15:20.037 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:15:20.037 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:15:20.037 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:15:20.037 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:15:20.037 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:15:20.037 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:15:20.295 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:15:20.295 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:15:20.295 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:15:20.295 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:15:20.295 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
[ 0]:0x1
00:15:20.295 12:59:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:15:20.295 12:59: nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:20.295 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2973ada5a7a24c2c9c82b2abc79024de
00:15:20.295 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2973ada5a7a24c2c9c82b2abc79024de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:20.295 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:15:20.295 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:15:20.295 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 1]:0x2
00:15:20.295 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:15:20.295 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=708bef7c809c4bd2bb3d6fdb11c36b1e
00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 708bef7c809c4bd2bb3d6fdb11c36b1e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:20.554 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:20.813 [ 0]:0x2 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=708bef7c809c4bd2bb3d6fdb11c36b1e 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 708bef7c809c4bd2bb3d6fdb11c36b1e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.813 12:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:20.813 [2024-11-29 12:59:20.594293] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:20.813 request: 00:15:20.813 { 00:15:20.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.813 "nsid": 2, 00:15:20.813 "host": "nqn.2016-06.io.spdk:host1", 00:15:20.813 "method": "nvmf_ns_remove_host", 00:15:20.813 "req_id": 1 00:15:20.813 } 00:15:20.813 Got JSON-RPC error response 00:15:20.813 response: 00:15:20.813 { 00:15:20.813 "code": -32602, 00:15:20.813 "message": "Invalid parameters" 00:15:20.813 } 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:20.813 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:21.072 12:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:21.072 [ 0]:0x2 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=708bef7c809c4bd2bb3d6fdb11c36b1e 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 708bef7c809c4bd2bb3d6fdb11c36b1e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1950683 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1950683 
/var/tmp/host.sock 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1950683 ']' 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:21.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.072 12:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:21.072 [2024-11-29 12:59:20.822665] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:15:21.072 [2024-11-29 12:59:20.822709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950683 ] 00:15:21.072 [2024-11-29 12:59:20.884635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.330 [2024-11-29 12:59:20.927540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.330 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.330 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:21.330 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.589 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:21.847 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 30431f1d-a711-49c2-adb2-4b77222003fa 00:15:21.847 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:21.847 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 30431F1DA71149C2ADB24B77222003FA -i 00:15:22.106 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a29efad1-2dbe-40ff-8df9-8635fda48d2c 00:15:22.106 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:22.106 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A29EFAD12DBE40FF8DF98635FDA48D2C -i 00:15:22.106 12:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:22.364 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:22.623 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:22.623 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:23.191 nvme0n1 00:15:23.191 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:23.191 12:59:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:23.450 nvme1n2 00:15:23.450 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:23.450 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:23.450 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:23.450 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:23.450 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:23.710 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:23.710 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:23.710 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:23.710 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:23.969 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 30431f1d-a711-49c2-adb2-4b77222003fa == \3\0\4\3\1\f\1\d\-\a\7\1\1\-\4\9\c\2\-\a\d\b\2\-\4\b\7\7\2\2\2\0\0\3\f\a ]] 00:15:23.969 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:23.969 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:23.969 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:23.969 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a29efad1-2dbe-40ff-8df9-8635fda48d2c == \a\2\9\e\f\a\d\1\-\2\d\b\e\-\4\0\f\f\-\8\d\f\9\-\8\6\3\5\f\d\a\4\8\d\2\c ]] 00:15:23.969 12:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:24.228 12:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 30431f1d-a711-49c2-adb2-4b77222003fa 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 30431F1DA71149C2ADB24B77222003FA 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 30431F1DA71149C2ADB24B77222003FA 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:24.488 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 30431F1DA71149C2ADB24B77222003FA 00:15:24.488 [2024-11-29 12:59:24.296511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:24.488 [2024-11-29 12:59:24.296549] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:24.488 [2024-11-29 12:59:24.296558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.488 request: 00:15:24.488 { 00:15:24.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.489 "namespace": { 00:15:24.489 "bdev_name": "invalid", 00:15:24.489 "nsid": 1, 00:15:24.489 "nguid": "30431F1DA71149C2ADB24B77222003FA", 00:15:24.489 "no_auto_visible": false, 00:15:24.489 "hide_metadata": false 00:15:24.489 }, 00:15:24.489 "method": "nvmf_subsystem_add_ns", 00:15:24.489 "req_id": 1 00:15:24.489 } 00:15:24.489 Got JSON-RPC error response 00:15:24.489 response: 00:15:24.489 { 00:15:24.489 "code": -32602, 00:15:24.489 "message": "Invalid parameters" 00:15:24.489 } 00:15:24.748 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:24.748 12:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:24.748 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:24.748 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:24.748 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 30431f1d-a711-49c2-adb2-4b77222003fa 00:15:24.748 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:24.748 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 30431F1DA71149C2ADB24B77222003FA -i 00:15:24.748 12:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1950683 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1950683 ']' 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1950683 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:27.282 12:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1950683 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1950683' 00:15:27.282 killing process with pid 1950683 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1950683 00:15:27.282 12:59:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1950683 00:15:27.282 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:15:27.541 rmmod nvme_tcp 00:15:27.541 rmmod nvme_fabrics 00:15:27.541 rmmod nvme_keyring 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1948686 ']' 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1948686 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1948686 ']' 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1948686 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.541 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1948686 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1948686' 00:15:27.801 killing process with pid 1948686 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1948686 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1948686 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.801 12:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:30.339 00:15:30.339 real 0m25.711s 00:15:30.339 user 0m30.964s 00:15:30.339 sys 0m6.755s 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:30.339 ************************************ 00:15:30.339 END TEST nvmf_ns_masking 00:15:30.339 ************************************ 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:30.339 ************************************ 00:15:30.339 START TEST nvmf_nvme_cli 00:15:30.339 ************************************ 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:30.339 * Looking for test storage... 00:15:30.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:30.339 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:30.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.340 --rc genhtml_branch_coverage=1 00:15:30.340 --rc genhtml_function_coverage=1 00:15:30.340 --rc genhtml_legend=1 00:15:30.340 --rc geninfo_all_blocks=1 00:15:30.340 --rc geninfo_unexecuted_blocks=1 00:15:30.340 
00:15:30.340 ' 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:30.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.340 --rc genhtml_branch_coverage=1 00:15:30.340 --rc genhtml_function_coverage=1 00:15:30.340 --rc genhtml_legend=1 00:15:30.340 --rc geninfo_all_blocks=1 00:15:30.340 --rc geninfo_unexecuted_blocks=1 00:15:30.340 00:15:30.340 ' 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:30.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.340 --rc genhtml_branch_coverage=1 00:15:30.340 --rc genhtml_function_coverage=1 00:15:30.340 --rc genhtml_legend=1 00:15:30.340 --rc geninfo_all_blocks=1 00:15:30.340 --rc geninfo_unexecuted_blocks=1 00:15:30.340 00:15:30.340 ' 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:30.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.340 --rc genhtml_branch_coverage=1 00:15:30.340 --rc genhtml_function_coverage=1 00:15:30.340 --rc genhtml_legend=1 00:15:30.340 --rc geninfo_all_blocks=1 00:15:30.340 --rc geninfo_unexecuted_blocks=1 00:15:30.340 00:15:30.340 ' 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.340 12:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.340 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:30.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:30.341 12:59:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:35.611 12:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:35.611 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.611 12:59:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:35.611 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.611 12:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:35.611 Found net devices under 0000:86:00.0: cvl_0_0 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:35.611 Found net devices under 0000:86:00.1: cvl_0_1 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.611 12:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:35.611 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:35.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:15:35.611 00:15:35.611 --- 10.0.0.2 ping statistics --- 00:15:35.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.612 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:15:35.612 00:15:35.612 --- 10.0.0.1 ping statistics --- 00:15:35.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.612 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:35.612 12:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1955281 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1955281 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1955281 ']' 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.612 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.612 [2024-11-29 12:59:35.334708] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:15:35.612 [2024-11-29 12:59:35.334756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.612 [2024-11-29 12:59:35.401341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.871 [2024-11-29 12:59:35.446171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.871 [2024-11-29 12:59:35.446219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.871 [2024-11-29 12:59:35.446226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.871 [2024-11-29 12:59:35.446232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.871 [2024-11-29 12:59:35.446238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:35.871 [2024-11-29 12:59:35.447811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.871 [2024-11-29 12:59:35.447910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.871 [2024-11-29 12:59:35.447994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.871 [2024-11-29 12:59:35.447997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.871 [2024-11-29 12:59:35.586292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.871 Malloc0 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.871 Malloc1 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.871 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.871 [2024-11-29 12:59:35.687324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.130 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.130 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:36.130 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.130 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:36.130 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.130 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:36.130 00:15:36.131 Discovery Log Number of Records 2, Generation counter 2 00:15:36.131 =====Discovery Log Entry 0====== 00:15:36.131 trtype: tcp 00:15:36.131 adrfam: ipv4 00:15:36.131 subtype: current discovery subsystem 00:15:36.131 treq: not required 00:15:36.131 portid: 0 00:15:36.131 trsvcid: 4420 
00:15:36.131 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:36.131 traddr: 10.0.0.2 00:15:36.131 eflags: explicit discovery connections, duplicate discovery information 00:15:36.131 sectype: none 00:15:36.131 =====Discovery Log Entry 1====== 00:15:36.131 trtype: tcp 00:15:36.131 adrfam: ipv4 00:15:36.131 subtype: nvme subsystem 00:15:36.131 treq: not required 00:15:36.131 portid: 0 00:15:36.131 trsvcid: 4420 00:15:36.131 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:36.131 traddr: 10.0.0.2 00:15:36.131 eflags: none 00:15:36.131 sectype: none 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:36.131 12:59:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:37.508 12:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:37.508 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:37.508 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:37.508 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:37.508 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:37.509 12:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:39.410 
12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:39.410 /dev/nvme0n2 ]] 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:39.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:39.410 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:39.669 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:39.670 rmmod nvme_tcp 00:15:39.670 rmmod nvme_fabrics 00:15:39.670 rmmod nvme_keyring 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1955281 ']' 
00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1955281 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1955281 ']' 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1955281 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1955281 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1955281' 00:15:39.670 killing process with pid 1955281 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1955281 00:15:39.670 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1955281 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.928 12:59:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:42.463 00:15:42.463 real 0m11.958s 00:15:42.463 user 0m18.069s 00:15:42.463 sys 0m4.621s 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:42.463 ************************************ 00:15:42.463 END TEST nvmf_nvme_cli 00:15:42.463 ************************************ 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.463 ************************************ 00:15:42.463 
START TEST nvmf_vfio_user 00:15:42.463 ************************************ 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:42.463 * Looking for test storage... 00:15:42.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:42.463 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.464 12:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:42.464 12:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:42.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.464 --rc genhtml_branch_coverage=1 00:15:42.464 --rc genhtml_function_coverage=1 00:15:42.464 --rc genhtml_legend=1 00:15:42.464 --rc geninfo_all_blocks=1 00:15:42.464 --rc geninfo_unexecuted_blocks=1 00:15:42.464 00:15:42.464 ' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:42.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.464 --rc genhtml_branch_coverage=1 00:15:42.464 --rc genhtml_function_coverage=1 00:15:42.464 --rc genhtml_legend=1 00:15:42.464 --rc geninfo_all_blocks=1 00:15:42.464 --rc geninfo_unexecuted_blocks=1 00:15:42.464 00:15:42.464 ' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:42.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.464 --rc genhtml_branch_coverage=1 00:15:42.464 --rc genhtml_function_coverage=1 00:15:42.464 --rc genhtml_legend=1 00:15:42.464 --rc geninfo_all_blocks=1 00:15:42.464 --rc geninfo_unexecuted_blocks=1 00:15:42.464 00:15:42.464 ' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:42.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.464 --rc genhtml_branch_coverage=1 00:15:42.464 --rc genhtml_function_coverage=1 00:15:42.464 --rc genhtml_legend=1 00:15:42.464 --rc geninfo_all_blocks=1 00:15:42.464 --rc geninfo_unexecuted_blocks=1 00:15:42.464 00:15:42.464 ' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.464 
12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:42.464 12:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:42.464 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1956465 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1956465' 00:15:42.465 Process pid: 1956465 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1956465 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1956465 ']' 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.465 12:59:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:42.465 [2024-11-29 12:59:41.995156] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:15:42.465 [2024-11-29 12:59:41.995205] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.465 [2024-11-29 12:59:42.056941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.465 [2024-11-29 12:59:42.096745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.465 [2024-11-29 12:59:42.096786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.465 [2024-11-29 12:59:42.096793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.465 [2024-11-29 12:59:42.096799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.465 [2024-11-29 12:59:42.096804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
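The `waitforlisten` step traced above blocks until the freshly launched `nvmf_tgt` process is accepting RPCs on its UNIX domain socket. A minimal sketch of that polling pattern is below; the function name, socket path default, and retry count are illustrative assumptions, not the actual `autotest_common.sh` implementation.

```shell
# Hypothetical sketch of the "wait for RPC socket" pattern: poll until the
# target's UNIX-domain socket appears (or give up after max_retries).
# Defaults mirror the log (/var/tmp/spdk.sock, 100 retries) but are assumptions.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock}
    local max_retries=${2:-100}
    local i=0
    while [ "$i" -lt "$max_retries" ]; do
        # -S is true only for an existing UNIX-domain socket
        if [ -S "$sock" ]; then
            echo "listening on $sock"
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

The real helper additionally traps SIGINT/SIGTERM to kill the target on interruption, as the `trap 'killprocess $nvmfpid; exit 1'` line in the log shows.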
00:15:42.465 [2024-11-29 12:59:42.098350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.465 [2024-11-29 12:59:42.098451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.465 [2024-11-29 12:59:42.098537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.465 [2024-11-29 12:59:42.098539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.465 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.465 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:42.465 12:59:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:43.401 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:43.659 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:43.659 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:43.659 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:43.659 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:43.659 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:43.917 Malloc1 00:15:43.918 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:44.176 12:59:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:44.434 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:44.693 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.693 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:44.693 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:44.693 Malloc2 00:15:44.693 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:44.952 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:45.209 12:59:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:45.469 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:45.469 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:45.469 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:45.469 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:45.469 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:45.469 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:45.469 [2024-11-29 12:59:45.116300] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:15:45.469 [2024-11-29 12:59:45.116347] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957040 ] 00:15:45.469 [2024-11-29 12:59:45.157892] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:45.469 [2024-11-29 12:59:45.163230] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:45.469 [2024-11-29 12:59:45.163255] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f61eaf5d000 00:15:45.469 [2024-11-29 12:59:45.164226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.469 [2024-11-29 12:59:45.165227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.469 [2024-11-29 12:59:45.166236] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.469 [2024-11-29 12:59:45.167240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:45.469 [2024-11-29 12:59:45.168246] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:45.469 [2024-11-29 12:59:45.169252] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.469 [2024-11-29 12:59:45.170260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:45.469 [2024-11-29 12:59:45.171264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:45.469 [2024-11-29 12:59:45.172271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:45.469 [2024-11-29 12:59:45.172280] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f61eaf52000 00:15:45.469 [2024-11-29 12:59:45.173224] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:45.469 [2024-11-29 12:59:45.187410] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:45.469 [2024-11-29 12:59:45.187439] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:45.469 [2024-11-29 12:59:45.190384] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:15:45.469 [2024-11-29 12:59:45.190425] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:45.469 [2024-11-29 12:59:45.190495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:45.469 [2024-11-29 12:59:45.190512] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:45.469 [2024-11-29 12:59:45.190517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:45.469 [2024-11-29 12:59:45.191378] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:45.469 [2024-11-29 12:59:45.191390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:45.469 [2024-11-29 12:59:45.191396] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:45.469 [2024-11-29 12:59:45.192381] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:45.469 [2024-11-29 12:59:45.192390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:45.469 [2024-11-29 12:59:45.192400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:45.469 [2024-11-29 12:59:45.193386] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:45.469 [2024-11-29 12:59:45.193394] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:45.469 [2024-11-29 12:59:45.194387] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:45.469 [2024-11-29 12:59:45.194396] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:45.469 [2024-11-29 12:59:45.194400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:45.469 [2024-11-29 12:59:45.194407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:45.469 [2024-11-29 12:59:45.194514] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:45.469 [2024-11-29 12:59:45.194519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:45.469 [2024-11-29 12:59:45.194524] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:45.469 [2024-11-29 12:59:45.195398] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:45.469 [2024-11-29 12:59:45.196401] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:45.469 [2024-11-29 12:59:45.197407] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:15:45.469 [2024-11-29 12:59:45.198404] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:45.469 [2024-11-29 12:59:45.198490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:45.469 [2024-11-29 12:59:45.199423] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:45.469 [2024-11-29 12:59:45.199430] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:45.469 [2024-11-29 12:59:45.199435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:45.469 [2024-11-29 12:59:45.199452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:45.469 [2024-11-29 12:59:45.199462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199482] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.470 [2024-11-29 12:59:45.199487] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.470 [2024-11-29 12:59:45.199490] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.470 [2024-11-29 12:59:45.199504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.199555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.199568] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:45.470 [2024-11-29 12:59:45.199572] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:45.470 [2024-11-29 12:59:45.199577] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:45.470 [2024-11-29 12:59:45.199581] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:45.470 [2024-11-29 12:59:45.199586] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:45.470 [2024-11-29 12:59:45.199590] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:45.470 [2024-11-29 12:59:45.199594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.199626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.199636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.470 [2024-11-29 
12:59:45.199644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.470 [2024-11-29 12:59:45.199652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.470 [2024-11-29 12:59:45.199659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.470 [2024-11-29 12:59:45.199663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.199689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.199695] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:45.470 [2024-11-29 12:59:45.199700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.199733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.199786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199800] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:45.470 [2024-11-29 12:59:45.199805] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:45.470 [2024-11-29 12:59:45.199808] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.470 [2024-11-29 12:59:45.199814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.199825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.199836] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:45.470 [2024-11-29 12:59:45.199845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199858] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.470 [2024-11-29 12:59:45.199862] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.470 [2024-11-29 12:59:45.199865] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.470 [2024-11-29 12:59:45.199870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.199898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.199909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199922] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:45.470 [2024-11-29 12:59:45.199926] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.470 [2024-11-29 12:59:45.199929] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.470 [2024-11-29 12:59:45.199934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.199953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.199962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.199997] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:45.470 [2024-11-29 12:59:45.200002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:45.470 [2024-11-29 12:59:45.200007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:45.470 [2024-11-29 12:59:45.200023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.200032] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.200043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.200053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.200063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.200072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.200082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.200093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:45.470 [2024-11-29 12:59:45.200105] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:45.470 [2024-11-29 12:59:45.200109] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:45.470 [2024-11-29 12:59:45.200113] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:45.470 [2024-11-29 12:59:45.200116] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:45.470 [2024-11-29 12:59:45.200119] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:45.470 [2024-11-29 12:59:45.200124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:15:45.470 [2024-11-29 12:59:45.200131] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:45.470 [2024-11-29 12:59:45.200135] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:45.470 [2024-11-29 12:59:45.200139] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.470 [2024-11-29 12:59:45.200144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:45.470 [2024-11-29 12:59:45.200150] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:45.470 [2024-11-29 12:59:45.200154] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:45.470 [2024-11-29 12:59:45.200157] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.470 [2024-11-29 12:59:45.200163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:45.471 [2024-11-29 12:59:45.200170] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:45.471 [2024-11-29 12:59:45.200175] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:45.471 [2024-11-29 12:59:45.200178] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:45.471 [2024-11-29 12:59:45.200184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:45.471 [2024-11-29 12:59:45.200190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:15:45.471 [2024-11-29 12:59:45.200202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:45.471 [2024-11-29 12:59:45.200212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:45.471 [2024-11-29 12:59:45.200218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:45.471 ===================================================== 00:15:45.471 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:45.471 ===================================================== 00:15:45.471 Controller Capabilities/Features 00:15:45.471 ================================ 00:15:45.471 Vendor ID: 4e58 00:15:45.471 Subsystem Vendor ID: 4e58 00:15:45.471 Serial Number: SPDK1 00:15:45.471 Model Number: SPDK bdev Controller 00:15:45.471 Firmware Version: 25.01 00:15:45.471 Recommended Arb Burst: 6 00:15:45.471 IEEE OUI Identifier: 8d 6b 50 00:15:45.471 Multi-path I/O 00:15:45.471 May have multiple subsystem ports: Yes 00:15:45.471 May have multiple controllers: Yes 00:15:45.471 Associated with SR-IOV VF: No 00:15:45.471 Max Data Transfer Size: 131072 00:15:45.471 Max Number of Namespaces: 32 00:15:45.471 Max Number of I/O Queues: 127 00:15:45.471 NVMe Specification Version (VS): 1.3 00:15:45.471 NVMe Specification Version (Identify): 1.3 00:15:45.471 Maximum Queue Entries: 256 00:15:45.471 Contiguous Queues Required: Yes 00:15:45.471 Arbitration Mechanisms Supported 00:15:45.471 Weighted Round Robin: Not Supported 00:15:45.471 Vendor Specific: Not Supported 00:15:45.471 Reset Timeout: 15000 ms 00:15:45.471 Doorbell Stride: 4 bytes 00:15:45.471 NVM Subsystem Reset: Not Supported 00:15:45.471 Command Sets Supported 00:15:45.471 NVM Command Set: Supported 00:15:45.471 Boot Partition: Not Supported 00:15:45.471 Memory 
Page Size Minimum: 4096 bytes 00:15:45.471 Memory Page Size Maximum: 4096 bytes 00:15:45.471 Persistent Memory Region: Not Supported 00:15:45.471 Optional Asynchronous Events Supported 00:15:45.471 Namespace Attribute Notices: Supported 00:15:45.471 Firmware Activation Notices: Not Supported 00:15:45.471 ANA Change Notices: Not Supported 00:15:45.471 PLE Aggregate Log Change Notices: Not Supported 00:15:45.471 LBA Status Info Alert Notices: Not Supported 00:15:45.471 EGE Aggregate Log Change Notices: Not Supported 00:15:45.471 Normal NVM Subsystem Shutdown event: Not Supported 00:15:45.471 Zone Descriptor Change Notices: Not Supported 00:15:45.471 Discovery Log Change Notices: Not Supported 00:15:45.471 Controller Attributes 00:15:45.471 128-bit Host Identifier: Supported 00:15:45.471 Non-Operational Permissive Mode: Not Supported 00:15:45.471 NVM Sets: Not Supported 00:15:45.471 Read Recovery Levels: Not Supported 00:15:45.471 Endurance Groups: Not Supported 00:15:45.471 Predictable Latency Mode: Not Supported 00:15:45.471 Traffic Based Keep ALive: Not Supported 00:15:45.471 Namespace Granularity: Not Supported 00:15:45.471 SQ Associations: Not Supported 00:15:45.471 UUID List: Not Supported 00:15:45.471 Multi-Domain Subsystem: Not Supported 00:15:45.471 Fixed Capacity Management: Not Supported 00:15:45.471 Variable Capacity Management: Not Supported 00:15:45.471 Delete Endurance Group: Not Supported 00:15:45.471 Delete NVM Set: Not Supported 00:15:45.471 Extended LBA Formats Supported: Not Supported 00:15:45.471 Flexible Data Placement Supported: Not Supported 00:15:45.471 00:15:45.471 Controller Memory Buffer Support 00:15:45.471 ================================ 00:15:45.471 Supported: No 00:15:45.471 00:15:45.471 Persistent Memory Region Support 00:15:45.471 ================================ 00:15:45.471 Supported: No 00:15:45.471 00:15:45.471 Admin Command Set Attributes 00:15:45.471 ============================ 00:15:45.471 Security Send/Receive: Not Supported 
00:15:45.471 Format NVM: Not Supported 00:15:45.471 Firmware Activate/Download: Not Supported 00:15:45.471 Namespace Management: Not Supported 00:15:45.471 Device Self-Test: Not Supported 00:15:45.471 Directives: Not Supported 00:15:45.471 NVMe-MI: Not Supported 00:15:45.471 Virtualization Management: Not Supported 00:15:45.471 Doorbell Buffer Config: Not Supported 00:15:45.471 Get LBA Status Capability: Not Supported 00:15:45.471 Command & Feature Lockdown Capability: Not Supported 00:15:45.471 Abort Command Limit: 4 00:15:45.471 Async Event Request Limit: 4 00:15:45.471 Number of Firmware Slots: N/A 00:15:45.471 Firmware Slot 1 Read-Only: N/A 00:15:45.471 Firmware Activation Without Reset: N/A 00:15:45.471 Multiple Update Detection Support: N/A 00:15:45.471 Firmware Update Granularity: No Information Provided 00:15:45.471 Per-Namespace SMART Log: No 00:15:45.471 Asymmetric Namespace Access Log Page: Not Supported 00:15:45.471 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:45.471 Command Effects Log Page: Supported 00:15:45.471 Get Log Page Extended Data: Supported 00:15:45.471 Telemetry Log Pages: Not Supported 00:15:45.471 Persistent Event Log Pages: Not Supported 00:15:45.471 Supported Log Pages Log Page: May Support 00:15:45.471 Commands Supported & Effects Log Page: Not Supported 00:15:45.471 Feature Identifiers & Effects Log Page:May Support 00:15:45.471 NVMe-MI Commands & Effects Log Page: May Support 00:15:45.471 Data Area 4 for Telemetry Log: Not Supported 00:15:45.471 Error Log Page Entries Supported: 128 00:15:45.471 Keep Alive: Supported 00:15:45.471 Keep Alive Granularity: 10000 ms 00:15:45.471 00:15:45.471 NVM Command Set Attributes 00:15:45.471 ========================== 00:15:45.471 Submission Queue Entry Size 00:15:45.471 Max: 64 00:15:45.471 Min: 64 00:15:45.471 Completion Queue Entry Size 00:15:45.471 Max: 16 00:15:45.471 Min: 16 00:15:45.471 Number of Namespaces: 32 00:15:45.471 Compare Command: Supported 00:15:45.471 Write Uncorrectable 
Command: Not Supported 00:15:45.471 Dataset Management Command: Supported 00:15:45.471 Write Zeroes Command: Supported 00:15:45.471 Set Features Save Field: Not Supported 00:15:45.471 Reservations: Not Supported 00:15:45.471 Timestamp: Not Supported 00:15:45.471 Copy: Supported 00:15:45.471 Volatile Write Cache: Present 00:15:45.471 Atomic Write Unit (Normal): 1 00:15:45.471 Atomic Write Unit (PFail): 1 00:15:45.471 Atomic Compare & Write Unit: 1 00:15:45.471 Fused Compare & Write: Supported 00:15:45.471 Scatter-Gather List 00:15:45.471 SGL Command Set: Supported (Dword aligned) 00:15:45.471 SGL Keyed: Not Supported 00:15:45.471 SGL Bit Bucket Descriptor: Not Supported 00:15:45.471 SGL Metadata Pointer: Not Supported 00:15:45.471 Oversized SGL: Not Supported 00:15:45.471 SGL Metadata Address: Not Supported 00:15:45.471 SGL Offset: Not Supported 00:15:45.471 Transport SGL Data Block: Not Supported 00:15:45.471 Replay Protected Memory Block: Not Supported 00:15:45.471 00:15:45.471 Firmware Slot Information 00:15:45.471 ========================= 00:15:45.471 Active slot: 1 00:15:45.471 Slot 1 Firmware Revision: 25.01 00:15:45.471 00:15:45.471 00:15:45.471 Commands Supported and Effects 00:15:45.471 ============================== 00:15:45.471 Admin Commands 00:15:45.471 -------------- 00:15:45.471 Get Log Page (02h): Supported 00:15:45.471 Identify (06h): Supported 00:15:45.471 Abort (08h): Supported 00:15:45.471 Set Features (09h): Supported 00:15:45.471 Get Features (0Ah): Supported 00:15:45.471 Asynchronous Event Request (0Ch): Supported 00:15:45.471 Keep Alive (18h): Supported 00:15:45.471 I/O Commands 00:15:45.471 ------------ 00:15:45.471 Flush (00h): Supported LBA-Change 00:15:45.471 Write (01h): Supported LBA-Change 00:15:45.471 Read (02h): Supported 00:15:45.471 Compare (05h): Supported 00:15:45.471 Write Zeroes (08h): Supported LBA-Change 00:15:45.471 Dataset Management (09h): Supported LBA-Change 00:15:45.471 Copy (19h): Supported LBA-Change 00:15:45.471 
00:15:45.471 Error Log 00:15:45.471 ========= 00:15:45.471 00:15:45.471 Arbitration 00:15:45.471 =========== 00:15:45.471 Arbitration Burst: 1 00:15:45.471 00:15:45.471 Power Management 00:15:45.471 ================ 00:15:45.471 Number of Power States: 1 00:15:45.471 Current Power State: Power State #0 00:15:45.471 Power State #0: 00:15:45.471 Max Power: 0.00 W 00:15:45.471 Non-Operational State: Operational 00:15:45.471 Entry Latency: Not Reported 00:15:45.471 Exit Latency: Not Reported 00:15:45.471 Relative Read Throughput: 0 00:15:45.471 Relative Read Latency: 0 00:15:45.472 Relative Write Throughput: 0 00:15:45.472 Relative Write Latency: 0 00:15:45.472 Idle Power: Not Reported 00:15:45.472 Active Power: Not Reported 00:15:45.472 Non-Operational Permissive Mode: Not Supported 00:15:45.472 00:15:45.472 Health Information 00:15:45.472 ================== 00:15:45.472 Critical Warnings: 00:15:45.472 Available Spare Space: OK 00:15:45.472 Temperature: OK 00:15:45.472 Device Reliability: OK 00:15:45.472 Read Only: No 00:15:45.472 Volatile Memory Backup: OK 00:15:45.472 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:45.472 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:45.472 Available Spare: 0% 00:15:45.472
[2024-11-29 12:59:45.200303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:45.472
[2024-11-29 12:59:45.200311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:45.472
[2024-11-29 12:59:45.200337] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:45.472
[2024-11-29 12:59:45.200346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.472
[2024-11-29 12:59:45.200352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.472
[2024-11-29 12:59:45.200358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.472
[2024-11-29 12:59:45.200363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.472
[2024-11-29 12:59:45.202956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:45.472
[2024-11-29 12:59:45.202969] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:45.472
[2024-11-29 12:59:45.203451] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:45.472
[2024-11-29 12:59:45.203507] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:45.472
[2024-11-29 12:59:45.203513] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:45.472
[2024-11-29 12:59:45.204459] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:45.472
[2024-11-29 12:59:45.204470] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:45.472
[2024-11-29 12:59:45.204519] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:45.472
[2024-11-29 12:59:45.206498] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:45.472
Available Spare Threshold: 0% 00:15:45.472 Life Percentage Used: 0%
00:15:45.472 Data Units Read: 0 00:15:45.472 Data Units Written: 0 00:15:45.472 Host Read Commands: 0 00:15:45.472 Host Write Commands: 0 00:15:45.472 Controller Busy Time: 0 minutes 00:15:45.472 Power Cycles: 0 00:15:45.472 Power On Hours: 0 hours 00:15:45.472 Unsafe Shutdowns: 0 00:15:45.472 Unrecoverable Media Errors: 0 00:15:45.472 Lifetime Error Log Entries: 0 00:15:45.472 Warning Temperature Time: 0 minutes 00:15:45.472 Critical Temperature Time: 0 minutes 00:15:45.472 00:15:45.472 Number of Queues 00:15:45.472 ================ 00:15:45.472 Number of I/O Submission Queues: 127 00:15:45.472 Number of I/O Completion Queues: 127 00:15:45.472 00:15:45.472 Active Namespaces 00:15:45.472 ================= 00:15:45.472 Namespace ID:1 00:15:45.472 Error Recovery Timeout: Unlimited 00:15:45.472 Command Set Identifier: NVM (00h) 00:15:45.472 Deallocate: Supported 00:15:45.472 Deallocated/Unwritten Error: Not Supported 00:15:45.472 Deallocated Read Value: Unknown 00:15:45.472 Deallocate in Write Zeroes: Not Supported 00:15:45.472 Deallocated Guard Field: 0xFFFF 00:15:45.472 Flush: Supported 00:15:45.472 Reservation: Supported 00:15:45.472 Namespace Sharing Capabilities: Multiple Controllers 00:15:45.472 Size (in LBAs): 131072 (0GiB) 00:15:45.472 Capacity (in LBAs): 131072 (0GiB) 00:15:45.472 Utilization (in LBAs): 131072 (0GiB) 00:15:45.472 NGUID: CB3D759ED26D4ACE927E9C9E3B8EA203 00:15:45.472 UUID: cb3d759e-d26d-4ace-927e-9c9e3b8ea203 00:15:45.472 Thin Provisioning: Not Supported 00:15:45.472 Per-NS Atomic Units: Yes 00:15:45.472 Atomic Boundary Size (Normal): 0 00:15:45.472 Atomic Boundary Size (PFail): 0 00:15:45.472 Atomic Boundary Offset: 0 00:15:45.472 Maximum Single Source Range Length: 65535 00:15:45.472 Maximum Copy Length: 65535 00:15:45.472 Maximum Source Range Count: 1 00:15:45.472 NGUID/EUI64 Never Reused: No 00:15:45.472 Namespace Write Protected: No 00:15:45.472 Number of LBA Formats: 1 00:15:45.472 Current LBA Format: LBA Format #00 00:15:45.472 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:15:45.472 00:15:45.472 12:59:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:45.730 [2024-11-29 12:59:45.443793] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:51.000 Initializing NVMe Controllers 00:15:51.000 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:51.000 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:51.000 Initialization complete. Launching workers. 00:15:51.000 ======================================================== 00:15:51.000 Latency(us) 00:15:51.000 Device Information : IOPS MiB/s Average min max 00:15:51.000 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39829.55 155.58 3213.50 1001.47 6625.10 00:15:51.000 ======================================================== 00:15:51.000 Total : 39829.55 155.58 3213.50 1001.47 6625.10 00:15:51.001 00:15:51.001 [2024-11-29 12:59:50.463627] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:51.001 12:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:51.001 [2024-11-29 12:59:50.702738] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.269 Initializing NVMe Controllers 00:15:56.269 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:56.269 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:56.269 Initialization complete. Launching workers. 00:15:56.269 ======================================================== 00:15:56.269 Latency(us) 00:15:56.269 Device Information : IOPS MiB/s Average min max 00:15:56.269 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.39 62.67 7978.20 4984.24 10974.10 00:15:56.269 ======================================================== 00:15:56.269 Total : 16042.39 62.67 7978.20 4984.24 10974.10 00:15:56.269 00:15:56.269 [2024-11-29 12:59:55.739178] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.269 12:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:56.269 [2024-11-29 12:59:55.954168] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.541 [2024-11-29 13:00:01.025236] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.541 Initializing NVMe Controllers 00:16:01.541 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:01.541 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:01.541 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:01.541 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:01.541 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:01.541 Initialization complete. 
Launching workers. 00:16:01.541 Starting thread on core 2 00:16:01.541 Starting thread on core 3 00:16:01.541 Starting thread on core 1 00:16:01.541 13:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:01.541 [2024-11-29 13:00:01.312398] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:04.823 [2024-11-29 13:00:04.369209] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:04.823 Initializing NVMe Controllers 00:16:04.823 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.823 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:04.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:04.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:04.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:04.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:04.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:04.823 Initialization complete. Launching workers. 
00:16:04.823 Starting thread on core 1 with urgent priority queue 00:16:04.823 Starting thread on core 2 with urgent priority queue 00:16:04.823 Starting thread on core 3 with urgent priority queue 00:16:04.823 Starting thread on core 0 with urgent priority queue 00:16:04.823 SPDK bdev Controller (SPDK1 ) core 0: 8075.33 IO/s 12.38 secs/100000 ios 00:16:04.823 SPDK bdev Controller (SPDK1 ) core 1: 9638.33 IO/s 10.38 secs/100000 ios 00:16:04.823 SPDK bdev Controller (SPDK1 ) core 2: 8328.33 IO/s 12.01 secs/100000 ios 00:16:04.823 SPDK bdev Controller (SPDK1 ) core 3: 7766.33 IO/s 12.88 secs/100000 ios 00:16:04.823 ======================================================== 00:16:04.823 00:16:04.823 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:05.082 [2024-11-29 13:00:04.657397] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:05.082 Initializing NVMe Controllers 00:16:05.082 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:05.082 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:05.082 Namespace ID: 1 size: 0GB 00:16:05.082 Initialization complete. 00:16:05.082 INFO: using host memory buffer for IO 00:16:05.082 Hello world! 
00:16:05.082 [2024-11-29 13:00:04.693691] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:05.082 13:00:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:05.340 [2024-11-29 13:00:04.979342] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:06.275 Initializing NVMe Controllers 00:16:06.275 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:06.275 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:06.275 Initialization complete. Launching workers. 00:16:06.275 submit (in ns) avg, min, max = 7958.2, 3248.7, 4001473.9 00:16:06.275 complete (in ns) avg, min, max = 19274.1, 1769.6, 3999577.4 00:16:06.275 00:16:06.275 Submit histogram 00:16:06.275 ================ 00:16:06.275 Range in us Cumulative Count 00:16:06.275 3.242 - 3.256: 0.0187% ( 3) 00:16:06.275 3.256 - 3.270: 0.0684% ( 8) 00:16:06.275 3.270 - 3.283: 0.1990% ( 21) 00:16:06.275 3.283 - 3.297: 0.7090% ( 82) 00:16:06.275 3.297 - 3.311: 1.9279% ( 196) 00:16:06.275 3.311 - 3.325: 3.4453% ( 244) 00:16:06.275 3.325 - 3.339: 5.5597% ( 340) 00:16:06.276 3.339 - 3.353: 9.3408% ( 608) 00:16:06.276 3.353 - 3.367: 14.6704% ( 857) 00:16:06.276 3.367 - 3.381: 20.3545% ( 914) 00:16:06.276 3.381 - 3.395: 26.7040% ( 1021) 00:16:06.276 3.395 - 3.409: 32.4565% ( 925) 00:16:06.276 3.409 - 3.423: 38.0784% ( 904) 00:16:06.276 3.423 - 3.437: 43.1343% ( 813) 00:16:06.276 3.437 - 3.450: 48.9055% ( 928) 00:16:06.276 3.450 - 3.464: 53.7624% ( 781) 00:16:06.276 3.464 - 3.478: 57.5187% ( 604) 00:16:06.276 3.478 - 3.492: 62.6057% ( 818) 00:16:06.276 3.492 - 3.506: 69.2537% ( 1069) 00:16:06.276 3.506 - 3.520: 73.7500% ( 723) 00:16:06.276 3.520 - 3.534: 77.1269% ( 543) 
00:16:06.276 3.534 - 3.548: 81.6791% ( 732) 00:16:06.276 3.548 - 3.562: 84.5398% ( 460) 00:16:06.276 3.562 - 3.590: 87.5995% ( 492) 00:16:06.276 3.590 - 3.617: 88.5261% ( 149) 00:16:06.276 3.617 - 3.645: 89.4279% ( 145) 00:16:06.276 3.645 - 3.673: 90.9204% ( 240) 00:16:06.276 3.673 - 3.701: 92.7052% ( 287) 00:16:06.276 3.701 - 3.729: 94.3968% ( 272) 00:16:06.276 3.729 - 3.757: 95.8955% ( 241) 00:16:06.276 3.757 - 3.784: 97.3259% ( 230) 00:16:06.276 3.784 - 3.812: 98.3955% ( 172) 00:16:06.276 3.812 - 3.840: 98.9055% ( 82) 00:16:06.276 3.840 - 3.868: 99.2848% ( 61) 00:16:06.276 3.868 - 3.896: 99.4776% ( 31) 00:16:06.276 3.896 - 3.923: 99.5958% ( 19) 00:16:06.276 3.923 - 3.951: 99.6082% ( 2) 00:16:06.276 3.979 - 4.007: 99.6144% ( 1) 00:16:06.276 5.009 - 5.037: 99.6206% ( 1) 00:16:06.276 5.816 - 5.843: 99.6269% ( 1) 00:16:06.276 5.843 - 5.871: 99.6331% ( 1) 00:16:06.276 5.871 - 5.899: 99.6393% ( 1) 00:16:06.276 5.899 - 5.927: 99.6455% ( 1) 00:16:06.276 6.010 - 6.038: 99.6517% ( 1) 00:16:06.276 6.261 - 6.289: 99.6580% ( 1) 00:16:06.276 6.289 - 6.317: 99.6642% ( 1) 00:16:06.276 6.734 - 6.762: 99.6704% ( 1) 00:16:06.276 6.817 - 6.845: 99.6766% ( 1) 00:16:06.276 6.845 - 6.873: 99.6891% ( 2) 00:16:06.276 6.873 - 6.901: 99.6953% ( 1) 00:16:06.276 6.901 - 6.929: 99.7015% ( 1) 00:16:06.276 6.929 - 6.957: 99.7077% ( 1) 00:16:06.276 6.984 - 7.012: 99.7139% ( 1) 00:16:06.276 7.068 - 7.096: 99.7201% ( 1) 00:16:06.276 7.235 - 7.290: 99.7450% ( 4) 00:16:06.276 7.346 - 7.402: 99.7575% ( 2) 00:16:06.276 7.402 - 7.457: 99.7637% ( 1) 00:16:06.276 7.513 - 7.569: 99.7761% ( 2) 00:16:06.276 7.624 - 7.680: 99.7886% ( 2) 00:16:06.276 7.847 - 7.903: 99.7948% ( 1) 00:16:06.276 7.903 - 7.958: 99.8010% ( 1) 00:16:06.276 8.014 - 8.070: 99.8072% ( 1) 00:16:06.276 8.125 - 8.181: 99.8134% ( 1) 00:16:06.276 8.292 - 8.348: 99.8197% ( 1) 00:16:06.276 8.459 - 8.515: 99.8321% ( 2) 00:16:06.276 8.570 - 8.626: 99.8445% ( 2) 00:16:06.276 8.737 - 8.793: 99.8507% ( 1) 00:16:06.276 10.184 - 10.240: 99.8570% ( 
1) 00:16:06.276 10.240 - 10.296: 99.8632% ( 1) 00:16:06.276 10.518 - 10.574: 99.8694% ( 1) 00:16:06.276 11.631 - 11.687: 99.8756% ( 1) 00:16:06.276 11.743 - 11.798: 99.8818% ( 1) 00:16:06.276 12.188 - 12.243: 99.8881% ( 1) 00:16:06.276 3989.148 - 4017.642: 100.0000% ( 18) 00:16:06.276 00:16:06.276 Complete histogram 00:16:06.276 ================== 00:16:06.276 Range in us Cumulative Count 00:16:06.276 1.767 - 1.774: 0.0311% ( 5) 00:16:06.276 1.774 - 1.781: 0.0373% ( 1) 00:16:06.276 1.781 - 1.795: 0.0933% ( 9) 00:16:06.276 [2024-11-29 13:00:06.001208] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:06.276 1.795 - 1.809: 0.1244% ( 5) 00:16:06.276 1.809 - 1.823: 1.5299% ( 226) 00:16:06.276 1.823 - 1.837: 18.7624% ( 2771) 00:16:06.276 1.837 - 1.850: 29.9689% ( 1802) 00:16:06.276 1.850 - 1.864: 33.2960% ( 535) 00:16:06.276 1.864 - 1.878: 40.5721% ( 1170) 00:16:06.276 1.878 - 1.892: 76.1194% ( 5716) 00:16:06.276 1.892 - 1.906: 91.7351% ( 2511) 00:16:06.276 1.906 - 1.920: 96.1194% ( 705) 00:16:06.276 1.920 - 1.934: 97.1020% ( 158) 00:16:06.276 1.934 - 1.948: 97.5933% ( 79) 00:16:06.276 1.948 - 1.962: 98.3955% ( 129) 00:16:06.276 1.962 - 1.976: 99.0734% ( 109) 00:16:06.276 1.976 - 1.990: 99.2662% ( 31) 00:16:06.276 1.990 - 2.003: 99.3221% ( 9) 00:16:06.276 2.017 - 2.031: 99.3346% ( 2) 00:16:06.276 2.393 - 2.407: 99.3408% ( 1) 00:16:06.276 4.424 - 4.452: 99.3470% ( 1) 00:16:06.276 4.480 - 4.508: 99.3532% ( 1) 00:16:06.276 4.591 - 4.619: 99.3657% ( 2) 00:16:06.276 4.981 - 5.009: 99.3719% ( 1) 00:16:06.276 5.120 - 5.148: 99.3781% ( 1) 00:16:06.276 5.176 - 5.203: 99.3843% ( 1) 00:16:06.276 5.315 - 5.343: 99.3905% ( 1) 00:16:06.276 5.343 - 5.370: 99.4030% ( 2) 00:16:06.276 5.426 - 5.454: 99.4092% ( 1) 00:16:06.276 5.454 - 5.482: 99.4154% ( 1) 00:16:06.276 5.510 - 5.537: 99.4216% ( 1) 00:16:06.276 5.537 - 5.565: 99.4341% ( 2) 00:16:06.276 5.621 - 5.649: 99.4403% ( 1) 00:16:06.276 5.788 - 5.816: 99.4465% ( 1)
00:16:06.276 5.816 - 5.843: 99.4590% ( 2) 00:16:06.276 6.010 - 6.038: 99.4652% ( 1) 00:16:06.276 6.066 - 6.094: 99.4714% ( 1) 00:16:06.276 6.177 - 6.205: 99.4776% ( 1) 00:16:06.276 6.344 - 6.372: 99.4838% ( 1) 00:16:06.276 6.456 - 6.483: 99.4900% ( 1) 00:16:06.276 6.511 - 6.539: 99.5025% ( 2) 00:16:06.276 6.623 - 6.650: 99.5087% ( 1) 00:16:06.276 6.706 - 6.734: 99.5149% ( 1) 00:16:06.276 6.762 - 6.790: 99.5211% ( 1) 00:16:06.276 7.123 - 7.179: 99.5274% ( 1) 00:16:06.276 7.179 - 7.235: 99.5336% ( 1) 00:16:06.276 7.290 - 7.346: 99.5398% ( 1) 00:16:06.276 7.402 - 7.457: 99.5460% ( 1) 00:16:06.276 7.457 - 7.513: 99.5585% ( 2) 00:16:06.276 9.517 - 9.572: 99.5647% ( 1) 00:16:06.276 3989.148 - 4017.642: 100.0000% ( 70) 00:16:06.276 00:16:06.276 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:06.276 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:06.276 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:06.276 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:06.276 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:06.536 [ 00:16:06.536 { 00:16:06.536 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.536 "subtype": "Discovery", 00:16:06.536 "listen_addresses": [], 00:16:06.536 "allow_any_host": true, 00:16:06.536 "hosts": [] 00:16:06.536 }, 00:16:06.536 { 00:16:06.536 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:06.536 "subtype": "NVMe", 00:16:06.536 "listen_addresses": [ 00:16:06.536 { 00:16:06.536 "trtype": "VFIOUSER", 00:16:06.536 "adrfam": "IPv4", 00:16:06.536 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:16:06.536 "trsvcid": "0" 00:16:06.536 } 00:16:06.536 ], 00:16:06.536 "allow_any_host": true, 00:16:06.536 "hosts": [], 00:16:06.536 "serial_number": "SPDK1", 00:16:06.536 "model_number": "SPDK bdev Controller", 00:16:06.536 "max_namespaces": 32, 00:16:06.536 "min_cntlid": 1, 00:16:06.536 "max_cntlid": 65519, 00:16:06.536 "namespaces": [ 00:16:06.536 { 00:16:06.536 "nsid": 1, 00:16:06.536 "bdev_name": "Malloc1", 00:16:06.536 "name": "Malloc1", 00:16:06.536 "nguid": "CB3D759ED26D4ACE927E9C9E3B8EA203", 00:16:06.536 "uuid": "cb3d759e-d26d-4ace-927e-9c9e3b8ea203" 00:16:06.536 } 00:16:06.536 ] 00:16:06.536 }, 00:16:06.536 { 00:16:06.536 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:06.536 "subtype": "NVMe", 00:16:06.536 "listen_addresses": [ 00:16:06.536 { 00:16:06.536 "trtype": "VFIOUSER", 00:16:06.536 "adrfam": "IPv4", 00:16:06.536 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:06.536 "trsvcid": "0" 00:16:06.536 } 00:16:06.536 ], 00:16:06.536 "allow_any_host": true, 00:16:06.536 "hosts": [], 00:16:06.536 "serial_number": "SPDK2", 00:16:06.536 "model_number": "SPDK bdev Controller", 00:16:06.536 "max_namespaces": 32, 00:16:06.536 "min_cntlid": 1, 00:16:06.536 "max_cntlid": 65519, 00:16:06.536 "namespaces": [ 00:16:06.536 { 00:16:06.536 "nsid": 1, 00:16:06.536 "bdev_name": "Malloc2", 00:16:06.536 "name": "Malloc2", 00:16:06.536 "nguid": "2AA3D9755BDE4AD486A5FD65A23AD186", 00:16:06.536 "uuid": "2aa3d975-5bde-4ad4-86a5-fd65a23ad186" 00:16:06.536 } 00:16:06.536 ] 00:16:06.536 } 00:16:06.536 ] 00:16:06.536 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:06.536 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1960721 00:16:06.536 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:06.536 13:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:06.536 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:06.536 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:06.536 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:06.536 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:06.536 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:06.536 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:06.820 [2024-11-29 13:00:06.428425] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:06.820 Malloc3 00:16:06.820 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:07.080 [2024-11-29 13:00:06.671178] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:07.080 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:07.080 Asynchronous Event Request test 00:16:07.080 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:07.080 Attached to 
/var/run/vfio-user/domain/vfio-user1/1 00:16:07.080 Registering asynchronous event callbacks... 00:16:07.080 Starting namespace attribute notice tests for all controllers... 00:16:07.080 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:07.080 aer_cb - Changed Namespace 00:16:07.080 Cleaning up... 00:16:07.080 [ 00:16:07.080 { 00:16:07.080 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:07.080 "subtype": "Discovery", 00:16:07.080 "listen_addresses": [], 00:16:07.080 "allow_any_host": true, 00:16:07.080 "hosts": [] 00:16:07.080 }, 00:16:07.080 { 00:16:07.080 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:07.080 "subtype": "NVMe", 00:16:07.080 "listen_addresses": [ 00:16:07.080 { 00:16:07.080 "trtype": "VFIOUSER", 00:16:07.080 "adrfam": "IPv4", 00:16:07.080 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:07.080 "trsvcid": "0" 00:16:07.080 } 00:16:07.080 ], 00:16:07.080 "allow_any_host": true, 00:16:07.080 "hosts": [], 00:16:07.080 "serial_number": "SPDK1", 00:16:07.080 "model_number": "SPDK bdev Controller", 00:16:07.080 "max_namespaces": 32, 00:16:07.080 "min_cntlid": 1, 00:16:07.080 "max_cntlid": 65519, 00:16:07.080 "namespaces": [ 00:16:07.080 { 00:16:07.080 "nsid": 1, 00:16:07.080 "bdev_name": "Malloc1", 00:16:07.080 "name": "Malloc1", 00:16:07.080 "nguid": "CB3D759ED26D4ACE927E9C9E3B8EA203", 00:16:07.080 "uuid": "cb3d759e-d26d-4ace-927e-9c9e3b8ea203" 00:16:07.080 }, 00:16:07.080 { 00:16:07.080 "nsid": 2, 00:16:07.080 "bdev_name": "Malloc3", 00:16:07.080 "name": "Malloc3", 00:16:07.080 "nguid": "4E78B045B704476F8068B3DF709EBE60", 00:16:07.080 "uuid": "4e78b045-b704-476f-8068-b3df709ebe60" 00:16:07.080 } 00:16:07.080 ] 00:16:07.080 }, 00:16:07.080 { 00:16:07.080 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:07.080 "subtype": "NVMe", 00:16:07.080 "listen_addresses": [ 00:16:07.080 { 00:16:07.080 "trtype": "VFIOUSER", 00:16:07.080 "adrfam": "IPv4", 00:16:07.080 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:16:07.080 "trsvcid": "0" 00:16:07.080 } 00:16:07.080 ], 00:16:07.080 "allow_any_host": true, 00:16:07.080 "hosts": [], 00:16:07.080 "serial_number": "SPDK2", 00:16:07.080 "model_number": "SPDK bdev Controller", 00:16:07.080 "max_namespaces": 32, 00:16:07.080 "min_cntlid": 1, 00:16:07.080 "max_cntlid": 65519, 00:16:07.080 "namespaces": [ 00:16:07.080 { 00:16:07.080 "nsid": 1, 00:16:07.080 "bdev_name": "Malloc2", 00:16:07.080 "name": "Malloc2", 00:16:07.080 "nguid": "2AA3D9755BDE4AD486A5FD65A23AD186", 00:16:07.080 "uuid": "2aa3d975-5bde-4ad4-86a5-fd65a23ad186" 00:16:07.080 } 00:16:07.080 ] 00:16:07.080 } 00:16:07.080 ] 00:16:07.080 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1960721 00:16:07.080 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.080 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:07.080 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:07.080 13:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:07.342 [2024-11-29 13:00:06.908532] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:16:07.342 [2024-11-29 13:00:06.908557] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1960770 ] 00:16:07.342 [2024-11-29 13:00:06.947755] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:07.342 [2024-11-29 13:00:06.952003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:07.342 [2024-11-29 13:00:06.952026] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2440a51000 00:16:07.342 [2024-11-29 13:00:06.953004] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.342 [2024-11-29 13:00:06.954006] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.342 [2024-11-29 13:00:06.955010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.342 [2024-11-29 13:00:06.956015] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.342 [2024-11-29 13:00:06.957019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.342 [2024-11-29 13:00:06.958031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.342 [2024-11-29 13:00:06.959037] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:07.342 
[2024-11-29 13:00:06.960045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:07.342 [2024-11-29 13:00:06.961052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:07.342 [2024-11-29 13:00:06.961065] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2440a46000 00:16:07.342 [2024-11-29 13:00:06.962010] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:07.342 [2024-11-29 13:00:06.972532] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:07.342 [2024-11-29 13:00:06.972557] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:07.342 [2024-11-29 13:00:06.977646] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:07.342 [2024-11-29 13:00:06.977683] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:07.342 [2024-11-29 13:00:06.977756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:07.342 [2024-11-29 13:00:06.977769] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:07.342 [2024-11-29 13:00:06.977774] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:07.342 [2024-11-29 13:00:06.978649] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:07.342 [2024-11-29 13:00:06.978661] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:07.342 [2024-11-29 13:00:06.978668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:07.342 [2024-11-29 13:00:06.979656] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:07.342 [2024-11-29 13:00:06.979664] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:07.342 [2024-11-29 13:00:06.979671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:07.342 [2024-11-29 13:00:06.980662] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:07.342 [2024-11-29 13:00:06.980671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:07.342 [2024-11-29 13:00:06.981668] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:07.342 [2024-11-29 13:00:06.981677] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:07.342 [2024-11-29 13:00:06.981682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:07.342 [2024-11-29 13:00:06.981688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:07.342 [2024-11-29 13:00:06.981796] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:07.342 [2024-11-29 13:00:06.981800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:07.342 [2024-11-29 13:00:06.981805] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:07.342 [2024-11-29 13:00:06.982671] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:07.342 [2024-11-29 13:00:06.983685] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:07.342 [2024-11-29 13:00:06.984692] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:07.342 [2024-11-29 13:00:06.985697] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:07.342 [2024-11-29 13:00:06.985737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:07.342 [2024-11-29 13:00:06.986715] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:07.342 [2024-11-29 13:00:06.986723] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:07.342 [2024-11-29 13:00:06.986728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:06.986745] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:07.343 [2024-11-29 13:00:06.986752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:06.986766] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:07.343 [2024-11-29 13:00:06.986771] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.343 [2024-11-29 13:00:06.986774] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.343 [2024-11-29 13:00:06.986786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:06.993958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:06.993969] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:07.343 [2024-11-29 13:00:06.993974] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:07.343 [2024-11-29 13:00:06.993978] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:07.343 [2024-11-29 13:00:06.993983] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:07.343 [2024-11-29 13:00:06.993987] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:07.343 [2024-11-29 13:00:06.993991] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:07.343 [2024-11-29 13:00:06.993996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:06.994003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:06.994013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.001958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:07.001970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.343 [2024-11-29 13:00:07.001981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.343 [2024-11-29 13:00:07.001989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.343 [2024-11-29 13:00:07.001997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.343 [2024-11-29 13:00:07.002001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.002010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.002018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.009952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:07.009960] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:07.343 [2024-11-29 13:00:07.009965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.009975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.009981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.009989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.017955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:07.018010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.018018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:07.343 
[2024-11-29 13:00:07.018026] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:07.343 [2024-11-29 13:00:07.018030] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:07.343 [2024-11-29 13:00:07.018034] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.343 [2024-11-29 13:00:07.018040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.025955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:07.025973] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:07.343 [2024-11-29 13:00:07.025980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.025987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.025994] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:07.343 [2024-11-29 13:00:07.025998] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.343 [2024-11-29 13:00:07.026001] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.343 [2024-11-29 13:00:07.026009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.033955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:07.033966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.033973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.033980] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:07.343 [2024-11-29 13:00:07.033984] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.343 [2024-11-29 13:00:07.033987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.343 [2024-11-29 13:00:07.033993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.041952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:07.041964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.041971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.041978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.041983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.041987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.041992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.041997] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:07.343 [2024-11-29 13:00:07.042001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:07.343 [2024-11-29 13:00:07.042006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:07.343 [2024-11-29 13:00:07.042021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.049952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:07.049965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.057953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:07.057965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.065952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 
13:00:07.065964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:07.343 [2024-11-29 13:00:07.073953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:07.343 [2024-11-29 13:00:07.073968] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:07.343 [2024-11-29 13:00:07.073973] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:07.343 [2024-11-29 13:00:07.073976] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:07.343 [2024-11-29 13:00:07.073980] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:07.343 [2024-11-29 13:00:07.073983] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:07.343 [2024-11-29 13:00:07.073989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:07.343 [2024-11-29 13:00:07.073996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:07.343 [2024-11-29 13:00:07.074000] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:07.343 [2024-11-29 13:00:07.074003] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.343 [2024-11-29 13:00:07.074008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:07.344 [2024-11-29 13:00:07.074015] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:07.344 [2024-11-29 13:00:07.074019] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:07.344 [2024-11-29 13:00:07.074022] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.344 [2024-11-29 13:00:07.074027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:07.344 [2024-11-29 13:00:07.074034] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:07.344 [2024-11-29 13:00:07.074038] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:07.344 [2024-11-29 13:00:07.074041] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:07.344 [2024-11-29 13:00:07.074047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:07.344 [2024-11-29 13:00:07.081952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:07.344 [2024-11-29 13:00:07.081967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:07.344 [2024-11-29 13:00:07.081977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:07.344 [2024-11-29 13:00:07.081983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:07.344 ===================================================== 00:16:07.344 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:07.344 ===================================================== 00:16:07.344 Controller Capabilities/Features 00:16:07.344 
================================ 00:16:07.344 Vendor ID: 4e58 00:16:07.344 Subsystem Vendor ID: 4e58 00:16:07.344 Serial Number: SPDK2 00:16:07.344 Model Number: SPDK bdev Controller 00:16:07.344 Firmware Version: 25.01 00:16:07.344 Recommended Arb Burst: 6 00:16:07.344 IEEE OUI Identifier: 8d 6b 50 00:16:07.344 Multi-path I/O 00:16:07.344 May have multiple subsystem ports: Yes 00:16:07.344 May have multiple controllers: Yes 00:16:07.344 Associated with SR-IOV VF: No 00:16:07.344 Max Data Transfer Size: 131072 00:16:07.344 Max Number of Namespaces: 32 00:16:07.344 Max Number of I/O Queues: 127 00:16:07.344 NVMe Specification Version (VS): 1.3 00:16:07.344 NVMe Specification Version (Identify): 1.3 00:16:07.344 Maximum Queue Entries: 256 00:16:07.344 Contiguous Queues Required: Yes 00:16:07.344 Arbitration Mechanisms Supported 00:16:07.344 Weighted Round Robin: Not Supported 00:16:07.344 Vendor Specific: Not Supported 00:16:07.344 Reset Timeout: 15000 ms 00:16:07.344 Doorbell Stride: 4 bytes 00:16:07.344 NVM Subsystem Reset: Not Supported 00:16:07.344 Command Sets Supported 00:16:07.344 NVM Command Set: Supported 00:16:07.344 Boot Partition: Not Supported 00:16:07.344 Memory Page Size Minimum: 4096 bytes 00:16:07.344 Memory Page Size Maximum: 4096 bytes 00:16:07.344 Persistent Memory Region: Not Supported 00:16:07.344 Optional Asynchronous Events Supported 00:16:07.344 Namespace Attribute Notices: Supported 00:16:07.344 Firmware Activation Notices: Not Supported 00:16:07.344 ANA Change Notices: Not Supported 00:16:07.344 PLE Aggregate Log Change Notices: Not Supported 00:16:07.344 LBA Status Info Alert Notices: Not Supported 00:16:07.344 EGE Aggregate Log Change Notices: Not Supported 00:16:07.344 Normal NVM Subsystem Shutdown event: Not Supported 00:16:07.344 Zone Descriptor Change Notices: Not Supported 00:16:07.344 Discovery Log Change Notices: Not Supported 00:16:07.344 Controller Attributes 00:16:07.344 128-bit Host Identifier: Supported 00:16:07.344 
Non-Operational Permissive Mode: Not Supported 00:16:07.344 NVM Sets: Not Supported 00:16:07.344 Read Recovery Levels: Not Supported 00:16:07.344 Endurance Groups: Not Supported 00:16:07.344 Predictable Latency Mode: Not Supported 00:16:07.344 Traffic Based Keep Alive: Not Supported 00:16:07.344 Namespace Granularity: Not Supported 00:16:07.344 SQ Associations: Not Supported 00:16:07.344 UUID List: Not Supported 00:16:07.344 Multi-Domain Subsystem: Not Supported 00:16:07.344 Fixed Capacity Management: Not Supported 00:16:07.344 Variable Capacity Management: Not Supported 00:16:07.344 Delete Endurance Group: Not Supported 00:16:07.344 Delete NVM Set: Not Supported 00:16:07.344 Extended LBA Formats Supported: Not Supported 00:16:07.344 Flexible Data Placement Supported: Not Supported 00:16:07.344 00:16:07.344 Controller Memory Buffer Support 00:16:07.344 ================================ 00:16:07.344 Supported: No 00:16:07.344 00:16:07.344 Persistent Memory Region Support 00:16:07.344 ================================ 00:16:07.344 Supported: No 00:16:07.344 00:16:07.344 Admin Command Set Attributes 00:16:07.344 ============================ 00:16:07.344 Security Send/Receive: Not Supported 00:16:07.344 Format NVM: Not Supported 00:16:07.344 Firmware Activate/Download: Not Supported 00:16:07.344 Namespace Management: Not Supported 00:16:07.344 Device Self-Test: Not Supported 00:16:07.344 Directives: Not Supported 00:16:07.344 NVMe-MI: Not Supported 00:16:07.344 Virtualization Management: Not Supported 00:16:07.344 Doorbell Buffer Config: Not Supported 00:16:07.344 Get LBA Status Capability: Not Supported 00:16:07.344 Command & Feature Lockdown Capability: Not Supported 00:16:07.344 Abort Command Limit: 4 00:16:07.344 Async Event Request Limit: 4 00:16:07.344 Number of Firmware Slots: N/A 00:16:07.344 Firmware Slot 1 Read-Only: N/A 00:16:07.344 Firmware Activation Without Reset: N/A 00:16:07.344 Multiple Update Detection Support: N/A 00:16:07.344 Firmware Update 
Granularity: No Information Provided 00:16:07.344 Per-Namespace SMART Log: No 00:16:07.344 Asymmetric Namespace Access Log Page: Not Supported 00:16:07.344 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:07.344 Command Effects Log Page: Supported 00:16:07.344 Get Log Page Extended Data: Supported 00:16:07.344 Telemetry Log Pages: Not Supported 00:16:07.344 Persistent Event Log Pages: Not Supported 00:16:07.344 Supported Log Pages Log Page: May Support 00:16:07.344 Commands Supported & Effects Log Page: Not Supported 00:16:07.344 Feature Identifiers & Effects Log Page: May Support 00:16:07.344 NVMe-MI Commands & Effects Log Page: May Support 00:16:07.344 Data Area 4 for Telemetry Log: Not Supported 00:16:07.344 Error Log Page Entries Supported: 128 00:16:07.344 Keep Alive: Supported 00:16:07.344 Keep Alive Granularity: 10000 ms 00:16:07.344 00:16:07.344 NVM Command Set Attributes 00:16:07.344 ========================== 00:16:07.344 Submission Queue Entry Size 00:16:07.344 Max: 64 00:16:07.344 Min: 64 00:16:07.344 Completion Queue Entry Size 00:16:07.344 Max: 16 00:16:07.344 Min: 16 00:16:07.344 Number of Namespaces: 32 00:16:07.344 Compare Command: Supported 00:16:07.344 Write Uncorrectable Command: Not Supported 00:16:07.344 Dataset Management Command: Supported 00:16:07.344 Write Zeroes Command: Supported 00:16:07.344 Set Features Save Field: Not Supported 00:16:07.344 Reservations: Not Supported 00:16:07.344 Timestamp: Not Supported 00:16:07.344 Copy: Supported 00:16:07.344 Volatile Write Cache: Present 00:16:07.344 Atomic Write Unit (Normal): 1 00:16:07.344 Atomic Write Unit (PFail): 1 00:16:07.344 Atomic Compare & Write Unit: 1 00:16:07.344 Fused Compare & Write: Supported 00:16:07.344 Scatter-Gather List 00:16:07.344 SGL Command Set: Supported (Dword aligned) 00:16:07.344 SGL Keyed: Not Supported 00:16:07.344 SGL Bit Bucket Descriptor: Not Supported 00:16:07.344 SGL Metadata Pointer: Not Supported 00:16:07.344 Oversized SGL: Not Supported 00:16:07.344 SGL 
Metadata Address: Not Supported 00:16:07.344 SGL Offset: Not Supported 00:16:07.344 Transport SGL Data Block: Not Supported 00:16:07.344 Replay Protected Memory Block: Not Supported 00:16:07.344 00:16:07.344 Firmware Slot Information 00:16:07.344 ========================= 00:16:07.344 Active slot: 1 00:16:07.344 Slot 1 Firmware Revision: 25.01 00:16:07.344 00:16:07.344 00:16:07.344 Commands Supported and Effects 00:16:07.344 ============================== 00:16:07.344 Admin Commands 00:16:07.344 -------------- 00:16:07.344 Get Log Page (02h): Supported 00:16:07.344 Identify (06h): Supported 00:16:07.344 Abort (08h): Supported 00:16:07.344 Set Features (09h): Supported 00:16:07.344 Get Features (0Ah): Supported 00:16:07.344 Asynchronous Event Request (0Ch): Supported 00:16:07.344 Keep Alive (18h): Supported 00:16:07.344 I/O Commands 00:16:07.344 ------------ 00:16:07.344 Flush (00h): Supported LBA-Change 00:16:07.344 Write (01h): Supported LBA-Change 00:16:07.344 Read (02h): Supported 00:16:07.344 Compare (05h): Supported 00:16:07.344 Write Zeroes (08h): Supported LBA-Change 00:16:07.344 Dataset Management (09h): Supported LBA-Change 00:16:07.344 Copy (19h): Supported LBA-Change 00:16:07.344 00:16:07.344 Error Log 00:16:07.344 ========= 00:16:07.345 00:16:07.345 Arbitration 00:16:07.345 =========== 00:16:07.345 Arbitration Burst: 1 00:16:07.345 00:16:07.345 Power Management 00:16:07.345 ================ 00:16:07.345 Number of Power States: 1 00:16:07.345 Current Power State: Power State #0 00:16:07.345 Power State #0: 00:16:07.345 Max Power: 0.00 W 00:16:07.345 Non-Operational State: Operational 00:16:07.345 Entry Latency: Not Reported 00:16:07.345 Exit Latency: Not Reported 00:16:07.345 Relative Read Throughput: 0 00:16:07.345 Relative Read Latency: 0 00:16:07.345 Relative Write Throughput: 0 00:16:07.345 Relative Write Latency: 0 00:16:07.345 Idle Power: Not Reported 00:16:07.345 Active Power: Not Reported 00:16:07.345 Non-Operational Permissive Mode: Not 
Supported 00:16:07.345 00:16:07.345 Health Information 00:16:07.345 ================== 00:16:07.345 Critical Warnings: 00:16:07.345 Available Spare Space: OK 00:16:07.345 Temperature: OK 00:16:07.345 Device Reliability: OK 00:16:07.345 Read Only: No 00:16:07.345 Volatile Memory Backup: OK 00:16:07.345 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:07.345 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:07.345 Available Spare: 0% 00:16:07.345 [2024-11-29 13:00:07.082075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:07.345 [2024-11-29 13:00:07.089954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:07.345 [2024-11-29 13:00:07.089985] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:07.345 [2024-11-29 13:00:07.089995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.345 [2024-11-29 13:00:07.090001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.345 [2024-11-29 13:00:07.090008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.345 [2024-11-29 13:00:07.090014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.345 [2024-11-29 13:00:07.090066] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:07.345 [2024-11-29 13:00:07.090078] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:07.345 
[2024-11-29 13:00:07.091065] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:07.345 [2024-11-29 13:00:07.091109] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:07.345 [2024-11-29 13:00:07.091116] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:07.345 [2024-11-29 13:00:07.092070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:07.345 [2024-11-29 13:00:07.092081] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:07.345 [2024-11-29 13:00:07.092127] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:07.345 [2024-11-29 13:00:07.093109] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:07.345 Available Spare Threshold: 0% 00:16:07.345 Life Percentage Used: 0% 00:16:07.345 Data Units Read: 0 00:16:07.345 Data Units Written: 0 00:16:07.345 Host Read Commands: 0 00:16:07.345 Host Write Commands: 0 00:16:07.345 Controller Busy Time: 0 minutes 00:16:07.345 Power Cycles: 0 00:16:07.345 Power On Hours: 0 hours 00:16:07.345 Unsafe Shutdowns: 0 00:16:07.345 Unrecoverable Media Errors: 0 00:16:07.345 Lifetime Error Log Entries: 0 00:16:07.345 Warning Temperature Time: 0 minutes 00:16:07.345 Critical Temperature Time: 0 minutes 00:16:07.345 00:16:07.345 Number of Queues 00:16:07.345 ================ 00:16:07.345 Number of I/O Submission Queues: 127 00:16:07.345 Number of I/O Completion Queues: 127 00:16:07.345 00:16:07.345 Active Namespaces 00:16:07.345 ================= 00:16:07.345 Namespace ID:1 00:16:07.345 Error Recovery Timeout: Unlimited 
00:16:07.345 Command Set Identifier: NVM (00h) 00:16:07.345 Deallocate: Supported 00:16:07.345 Deallocated/Unwritten Error: Not Supported 00:16:07.345 Deallocated Read Value: Unknown 00:16:07.345 Deallocate in Write Zeroes: Not Supported 00:16:07.345 Deallocated Guard Field: 0xFFFF 00:16:07.345 Flush: Supported 00:16:07.345 Reservation: Supported 00:16:07.345 Namespace Sharing Capabilities: Multiple Controllers 00:16:07.345 Size (in LBAs): 131072 (0GiB) 00:16:07.345 Capacity (in LBAs): 131072 (0GiB) 00:16:07.345 Utilization (in LBAs): 131072 (0GiB) 00:16:07.345 NGUID: 2AA3D9755BDE4AD486A5FD65A23AD186 00:16:07.345 UUID: 2aa3d975-5bde-4ad4-86a5-fd65a23ad186 00:16:07.345 Thin Provisioning: Not Supported 00:16:07.345 Per-NS Atomic Units: Yes 00:16:07.345 Atomic Boundary Size (Normal): 0 00:16:07.345 Atomic Boundary Size (PFail): 0 00:16:07.345 Atomic Boundary Offset: 0 00:16:07.345 Maximum Single Source Range Length: 65535 00:16:07.345 Maximum Copy Length: 65535 00:16:07.345 Maximum Source Range Count: 1 00:16:07.345 NGUID/EUI64 Never Reused: No 00:16:07.345 Namespace Write Protected: No 00:16:07.345 Number of LBA Formats: 1 00:16:07.345 Current LBA Format: LBA Format #00 00:16:07.345 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:07.345 00:16:07.345 13:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:07.604 [2024-11-29 13:00:07.320379] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.873 Initializing NVMe Controllers 00:16:12.873 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:12.873 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:12.873 Initialization complete. Launching workers. 00:16:12.873 ======================================================== 00:16:12.873 Latency(us) 00:16:12.873 Device Information : IOPS MiB/s Average min max 00:16:12.873 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39947.58 156.05 3204.56 999.56 10563.63 00:16:12.873 ======================================================== 00:16:12.873 Total : 39947.58 156.05 3204.56 999.56 10563.63 00:16:12.873 00:16:12.873 [2024-11-29 13:00:12.425211] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.873 13:00:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:12.873 [2024-11-29 13:00:12.656884] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.145 Initializing NVMe Controllers 00:16:18.145 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:18.145 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:18.145 Initialization complete. Launching workers. 
00:16:18.145 ======================================================== 00:16:18.145 Latency(us) 00:16:18.145 Device Information : IOPS MiB/s Average min max 00:16:18.145 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39901.98 155.87 3207.89 1004.59 10555.10 00:16:18.145 ======================================================== 00:16:18.145 Total : 39901.98 155.87 3207.89 1004.59 10555.10 00:16:18.145 00:16:18.145 [2024-11-29 13:00:17.674931] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.145 13:00:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:18.145 [2024-11-29 13:00:17.889513] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.414 [2024-11-29 13:00:23.023037] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.414 Initializing NVMe Controllers 00:16:23.414 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:23.414 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:23.414 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:23.414 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:23.414 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:23.414 Initialization complete. Launching workers. 
00:16:23.414 Starting thread on core 2 00:16:23.414 Starting thread on core 3 00:16:23.414 Starting thread on core 1 00:16:23.414 13:00:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:23.673 [2024-11-29 13:00:23.324451] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:26.959 [2024-11-29 13:00:26.383437] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.959 Initializing NVMe Controllers 00:16:26.959 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.959 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.959 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:26.959 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:26.959 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:26.959 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:26.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:26.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:26.959 Initialization complete. Launching workers. 
00:16:26.959 Starting thread on core 1 with urgent priority queue 00:16:26.959 Starting thread on core 2 with urgent priority queue 00:16:26.959 Starting thread on core 3 with urgent priority queue 00:16:26.959 Starting thread on core 0 with urgent priority queue 00:16:26.959 SPDK bdev Controller (SPDK2 ) core 0: 5517.33 IO/s 18.12 secs/100000 ios 00:16:26.960 SPDK bdev Controller (SPDK2 ) core 1: 6015.00 IO/s 16.63 secs/100000 ios 00:16:26.960 SPDK bdev Controller (SPDK2 ) core 2: 4954.33 IO/s 20.18 secs/100000 ios 00:16:26.960 SPDK bdev Controller (SPDK2 ) core 3: 7280.33 IO/s 13.74 secs/100000 ios 00:16:26.960 ======================================================== 00:16:26.960 00:16:26.960 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:26.960 [2024-11-29 13:00:26.671407] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:26.960 Initializing NVMe Controllers 00:16:26.960 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.960 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.960 Namespace ID: 1 size: 0GB 00:16:26.960 Initialization complete. 00:16:26.960 INFO: using host memory buffer for IO 00:16:26.960 Hello world! 
00:16:26.960 [2024-11-29 13:00:26.681466] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.960 13:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:27.218 [2024-11-29 13:00:26.967865] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:28.594 Initializing NVMe Controllers 00:16:28.594 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:28.594 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:28.594 Initialization complete. Launching workers. 00:16:28.594 submit (in ns) avg, min, max = 6479.7, 3248.7, 4009283.5 00:16:28.594 complete (in ns) avg, min, max = 21694.7, 1779.1, 5994945.2 00:16:28.594 00:16:28.594 Submit histogram 00:16:28.594 ================ 00:16:28.594 Range in us Cumulative Count 00:16:28.594 3.242 - 3.256: 0.0062% ( 1) 00:16:28.594 3.256 - 3.270: 0.0187% ( 2) 00:16:28.594 3.270 - 3.283: 0.0748% ( 9) 00:16:28.594 3.283 - 3.297: 0.1559% ( 13) 00:16:28.594 3.297 - 3.311: 0.3865% ( 37) 00:16:28.594 3.311 - 3.325: 0.8728% ( 78) 00:16:28.594 3.325 - 3.339: 2.3565% ( 238) 00:16:28.594 3.339 - 3.353: 6.6205% ( 684) 00:16:28.594 3.353 - 3.367: 12.2374% ( 901) 00:16:28.594 3.367 - 3.381: 18.0662% ( 935) 00:16:28.594 3.381 - 3.395: 24.1880% ( 982) 00:16:28.594 3.395 - 3.409: 30.3472% ( 988) 00:16:28.594 3.409 - 3.423: 35.1911% ( 777) 00:16:28.594 3.423 - 3.437: 40.3965% ( 835) 00:16:28.594 3.437 - 3.450: 45.7266% ( 855) 00:16:28.594 3.450 - 3.464: 49.8722% ( 665) 00:16:28.594 3.464 - 3.478: 53.4194% ( 569) 00:16:28.594 3.478 - 3.492: 58.5126% ( 817) 00:16:28.594 3.492 - 3.506: 66.3799% ( 1262) 00:16:28.594 3.506 - 3.520: 71.6040% ( 838) 00:16:28.594 3.520 - 3.534: 75.9304% ( 694) 
00:16:28.594 3.534 - 3.548: 80.6870% ( 763) 00:16:28.594 3.548 - 3.562: 83.8539% ( 508) 00:16:28.594 3.562 - 3.590: 86.7839% ( 470) 00:16:28.594 3.590 - 3.617: 87.5756% ( 127) 00:16:28.594 3.617 - 3.645: 88.5045% ( 149) 00:16:28.594 3.645 - 3.673: 90.2001% ( 272) 00:16:28.594 3.673 - 3.701: 91.8958% ( 272) 00:16:28.594 3.701 - 3.729: 93.4667% ( 252) 00:16:28.594 3.729 - 3.757: 95.1063% ( 263) 00:16:28.594 3.757 - 3.784: 96.8082% ( 273) 00:16:28.594 3.784 - 3.812: 98.1797% ( 220) 00:16:28.594 3.812 - 3.840: 98.8779% ( 112) 00:16:28.594 3.840 - 3.868: 99.3143% ( 70) 00:16:28.594 3.868 - 3.896: 99.5512% ( 38) 00:16:28.594 3.896 - 3.923: 99.7008% ( 24) 00:16:28.594 3.923 - 3.951: 99.7195% ( 3) 00:16:28.594 3.951 - 3.979: 99.7257% ( 1) 00:16:28.594 4.007 - 4.035: 99.7319% ( 1) 00:16:28.594 5.510 - 5.537: 99.7382% ( 1) 00:16:28.594 5.677 - 5.704: 99.7444% ( 1) 00:16:28.594 5.704 - 5.732: 99.7506% ( 1) 00:16:28.594 5.788 - 5.816: 99.7569% ( 1) 00:16:28.594 5.955 - 5.983: 99.7631% ( 1) 00:16:28.594 6.122 - 6.150: 99.7693% ( 1) 00:16:28.595 6.150 - 6.177: 99.7756% ( 1) 00:16:28.595 6.317 - 6.344: 99.7818% ( 1) 00:16:28.595 6.623 - 6.650: 99.7943% ( 2) 00:16:28.595 6.790 - 6.817: 99.8005% ( 1) 00:16:28.595 6.929 - 6.957: 99.8067% ( 1) 00:16:28.595 6.957 - 6.984: 99.8130% ( 1) 00:16:28.595 7.068 - 7.096: 99.8192% ( 1) 00:16:28.595 7.123 - 7.179: 99.8254% ( 1) 00:16:28.595 7.179 - 7.235: 99.8317% ( 1) 00:16:28.595 7.235 - 7.290: 99.8441% ( 2) 00:16:28.595 7.402 - 7.457: 99.8504% ( 1) 00:16:28.595 7.569 - 7.624: 99.8753% ( 4) 00:16:28.595 8.070 - 8.125: 99.8878% ( 2) 00:16:28.595 8.181 - 8.237: 99.8940% ( 1) 00:16:28.595 8.403 - 8.459: 99.9003% ( 1) 00:16:28.595 8.626 - 8.682: 99.9065% ( 1) 00:16:28.595 8.682 - 8.737: 99.9190% ( 2) 00:16:28.595 9.016 - 9.071: 99.9252% ( 1) 00:16:28.595 3989.148 - 4017.642: 100.0000% ( 12) 00:16:28.595 00:16:28.595 Complete histogram 00:16:28.595 ================== 00:16:28.595 Range in us Cumulative Count 00:16:28.595 1.774 - 1.781: 0.0187% ( 
3) 00:16:28.595 1.795 - 1.809: 0.0312% ( 2) 00:16:28.595 1.809 - 1.823: 0.5860% ( 89) 00:16:28.595 1.823 - 1.837: 8.5406% ( 1276) 00:16:28.595 1.837 - 1.850: 15.8594% ( 1174) 00:16:28.595 1.850 - 1.864: 19.4252% ( 572) 00:16:28.595 1.864 - 1.878: 46.5619% ( 4353) 00:16:28.595 1.878 - 1.892: 85.7927% ( 6293) 00:16:28.595 1.892 - 1.906: 94.5702% ( 1408) 00:16:28.595 1.906 - 1.920: 97.2196% ( 425) 00:16:28.595 1.920 - 1.934: 97.8493% ( 101) 00:16:28.595 1.934 - 1.948: 98.3230% ( 76) 00:16:28.595 1.948 - 1.962: 98.7844% ( 74) 00:16:28.595 [2024-11-29 13:00:28.059979] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:28.595 1.962 - 1.976: 99.1584% ( 60) 00:16:28.595 1.976 - 1.990: 99.2706% ( 18) 00:16:28.595 1.990 - 2.003: 99.2893% ( 3) 00:16:28.595 2.003 - 2.017: 99.3018% ( 2) 00:16:28.595 2.017 - 2.031: 99.3143% ( 2) 00:16:28.595 2.031 - 2.045: 99.3205% ( 1) 00:16:28.595 2.059 - 2.073: 99.3267% ( 1) 00:16:28.595 3.840 - 3.868: 99.3392% ( 2) 00:16:28.595 3.923 - 3.951: 99.3454% ( 1) 00:16:28.595 3.951 - 3.979: 99.3517% ( 1) 00:16:28.595 4.341 - 4.369: 99.3579% ( 1) 00:16:28.595 4.424 - 4.452: 99.3641% ( 1) 00:16:28.595 4.591 - 4.619: 99.3704% ( 1) 00:16:28.595 4.703 - 4.730: 99.3766% ( 1) 00:16:28.595 4.842 - 4.870: 99.3828% ( 1) 00:16:28.595 4.897 - 4.925: 99.3891% ( 1) 00:16:28.595 4.925 - 4.953: 99.3953% ( 1) 00:16:28.595 5.398 - 5.426: 99.4015% ( 1) 00:16:28.595 5.482 - 5.510: 99.4078% ( 1) 00:16:28.595 5.510 - 5.537: 99.4140% ( 1) 00:16:28.595 5.537 - 5.565: 99.4202% ( 1) 00:16:28.595 5.871 - 5.899: 99.4265% ( 1) 00:16:28.595 5.927 - 5.955: 99.4327% ( 1) 00:16:28.595 6.177 - 6.205: 99.4389% ( 1) 00:16:28.595 6.400 - 6.428: 99.4452% ( 1) 00:16:28.595 6.623 - 6.650: 99.4514% ( 1) 00:16:28.595 6.650 - 6.678: 99.4576% ( 1) 00:16:28.595 6.790 - 6.817: 99.4639% ( 1) 00:16:28.595 7.012 - 7.040: 99.4701% ( 1) 00:16:28.595 7.096 - 7.123: 99.4763% ( 1) 00:16:28.595 7.179 - 7.235: 99.4888% ( 2) 00:16:28.595 7.569 - 
7.624: 99.4950% ( 1) 00:16:28.595 40.292 - 40.515: 99.5013% ( 1) 00:16:28.595 1909.092 - 1923.339: 99.5075% ( 1) 00:16:28.595 2008.821 - 2023.068: 99.5200% ( 2) 00:16:28.595 2208.278 - 2222.525: 99.5262% ( 1) 00:16:28.595 3989.148 - 4017.642: 99.9813% ( 73) 00:16:28.595 5983.722 - 6012.216: 100.0000% ( 3) 00:16:28.595 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:28.595 [ 00:16:28.595 { 00:16:28.595 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:28.595 "subtype": "Discovery", 00:16:28.595 "listen_addresses": [], 00:16:28.595 "allow_any_host": true, 00:16:28.595 "hosts": [] 00:16:28.595 }, 00:16:28.595 { 00:16:28.595 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:28.595 "subtype": "NVMe", 00:16:28.595 "listen_addresses": [ 00:16:28.595 { 00:16:28.595 "trtype": "VFIOUSER", 00:16:28.595 "adrfam": "IPv4", 00:16:28.595 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:28.595 "trsvcid": "0" 00:16:28.595 } 00:16:28.595 ], 00:16:28.595 "allow_any_host": true, 00:16:28.595 "hosts": [], 00:16:28.595 "serial_number": "SPDK1", 00:16:28.595 "model_number": "SPDK bdev Controller", 00:16:28.595 "max_namespaces": 32, 00:16:28.595 "min_cntlid": 1, 00:16:28.595 "max_cntlid": 65519, 00:16:28.595 "namespaces": [ 00:16:28.595 { 00:16:28.595 
"nsid": 1, 00:16:28.595 "bdev_name": "Malloc1", 00:16:28.595 "name": "Malloc1", 00:16:28.595 "nguid": "CB3D759ED26D4ACE927E9C9E3B8EA203", 00:16:28.595 "uuid": "cb3d759e-d26d-4ace-927e-9c9e3b8ea203" 00:16:28.595 }, 00:16:28.595 { 00:16:28.595 "nsid": 2, 00:16:28.595 "bdev_name": "Malloc3", 00:16:28.595 "name": "Malloc3", 00:16:28.595 "nguid": "4E78B045B704476F8068B3DF709EBE60", 00:16:28.595 "uuid": "4e78b045-b704-476f-8068-b3df709ebe60" 00:16:28.595 } 00:16:28.595 ] 00:16:28.595 }, 00:16:28.595 { 00:16:28.595 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:28.595 "subtype": "NVMe", 00:16:28.595 "listen_addresses": [ 00:16:28.595 { 00:16:28.595 "trtype": "VFIOUSER", 00:16:28.595 "adrfam": "IPv4", 00:16:28.595 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:28.595 "trsvcid": "0" 00:16:28.595 } 00:16:28.595 ], 00:16:28.595 "allow_any_host": true, 00:16:28.595 "hosts": [], 00:16:28.595 "serial_number": "SPDK2", 00:16:28.595 "model_number": "SPDK bdev Controller", 00:16:28.595 "max_namespaces": 32, 00:16:28.595 "min_cntlid": 1, 00:16:28.595 "max_cntlid": 65519, 00:16:28.595 "namespaces": [ 00:16:28.595 { 00:16:28.595 "nsid": 1, 00:16:28.595 "bdev_name": "Malloc2", 00:16:28.595 "name": "Malloc2", 00:16:28.595 "nguid": "2AA3D9755BDE4AD486A5FD65A23AD186", 00:16:28.595 "uuid": "2aa3d975-5bde-4ad4-86a5-fd65a23ad186" 00:16:28.595 } 00:16:28.595 ] 00:16:28.595 } 00:16:28.595 ] 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1964619 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:28.595 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:28.886 [2024-11-29 13:00:28.476349] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:28.886 Malloc4 00:16:28.886 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:29.166 [2024-11-29 13:00:28.719163] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:29.166 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:29.166 Asynchronous Event Request test 00:16:29.166 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.166 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:29.166 Registering asynchronous event callbacks... 00:16:29.166 Starting namespace attribute notice tests for all controllers... 
00:16:29.166 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:29.166 aer_cb - Changed Namespace 00:16:29.166 Cleaning up... 00:16:29.166 [ 00:16:29.166 { 00:16:29.166 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:29.166 "subtype": "Discovery", 00:16:29.166 "listen_addresses": [], 00:16:29.166 "allow_any_host": true, 00:16:29.166 "hosts": [] 00:16:29.166 }, 00:16:29.166 { 00:16:29.166 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:29.166 "subtype": "NVMe", 00:16:29.166 "listen_addresses": [ 00:16:29.166 { 00:16:29.166 "trtype": "VFIOUSER", 00:16:29.166 "adrfam": "IPv4", 00:16:29.166 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:29.166 "trsvcid": "0" 00:16:29.166 } 00:16:29.166 ], 00:16:29.166 "allow_any_host": true, 00:16:29.166 "hosts": [], 00:16:29.167 "serial_number": "SPDK1", 00:16:29.167 "model_number": "SPDK bdev Controller", 00:16:29.167 "max_namespaces": 32, 00:16:29.167 "min_cntlid": 1, 00:16:29.167 "max_cntlid": 65519, 00:16:29.167 "namespaces": [ 00:16:29.167 { 00:16:29.167 "nsid": 1, 00:16:29.167 "bdev_name": "Malloc1", 00:16:29.167 "name": "Malloc1", 00:16:29.167 "nguid": "CB3D759ED26D4ACE927E9C9E3B8EA203", 00:16:29.167 "uuid": "cb3d759e-d26d-4ace-927e-9c9e3b8ea203" 00:16:29.167 }, 00:16:29.167 { 00:16:29.167 "nsid": 2, 00:16:29.167 "bdev_name": "Malloc3", 00:16:29.167 "name": "Malloc3", 00:16:29.167 "nguid": "4E78B045B704476F8068B3DF709EBE60", 00:16:29.167 "uuid": "4e78b045-b704-476f-8068-b3df709ebe60" 00:16:29.167 } 00:16:29.167 ] 00:16:29.167 }, 00:16:29.167 { 00:16:29.167 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:29.167 "subtype": "NVMe", 00:16:29.167 "listen_addresses": [ 00:16:29.167 { 00:16:29.167 "trtype": "VFIOUSER", 00:16:29.167 "adrfam": "IPv4", 00:16:29.167 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:29.167 "trsvcid": "0" 00:16:29.167 } 00:16:29.167 ], 00:16:29.167 "allow_any_host": true, 00:16:29.167 "hosts": [], 00:16:29.167 "serial_number": 
"SPDK2", 00:16:29.167 "model_number": "SPDK bdev Controller", 00:16:29.167 "max_namespaces": 32, 00:16:29.167 "min_cntlid": 1, 00:16:29.167 "max_cntlid": 65519, 00:16:29.167 "namespaces": [ 00:16:29.167 { 00:16:29.167 "nsid": 1, 00:16:29.167 "bdev_name": "Malloc2", 00:16:29.167 "name": "Malloc2", 00:16:29.167 "nguid": "2AA3D9755BDE4AD486A5FD65A23AD186", 00:16:29.167 "uuid": "2aa3d975-5bde-4ad4-86a5-fd65a23ad186" 00:16:29.167 }, 00:16:29.167 { 00:16:29.167 "nsid": 2, 00:16:29.167 "bdev_name": "Malloc4", 00:16:29.167 "name": "Malloc4", 00:16:29.167 "nguid": "35CC7080534A4E45AC20202E7A31CF0C", 00:16:29.167 "uuid": "35cc7080-534a-4e45-ac20-202e7a31cf0c" 00:16:29.167 } 00:16:29.167 ] 00:16:29.167 } 00:16:29.167 ] 00:16:29.167 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1964619 00:16:29.167 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:29.167 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1956465 00:16:29.167 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1956465 ']' 00:16:29.167 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1956465 00:16:29.167 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:29.167 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.167 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1956465 00:16:29.463 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.463 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.463 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1956465' 00:16:29.463 killing process with pid 1956465 00:16:29.463 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1956465 00:16:29.463 13:00:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1956465 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1964859 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1964859' 00:16:29.463 Process pid: 1964859 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1964859 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1964859 ']' 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.463 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:29.760 [2024-11-29 13:00:29.285479] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:29.760 [2024-11-29 13:00:29.286404] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:16:29.760 [2024-11-29 13:00:29.286444] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.760 [2024-11-29 13:00:29.346254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.760 [2024-11-29 13:00:29.388879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.760 [2024-11-29 13:00:29.388917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.760 [2024-11-29 13:00:29.388925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.760 [2024-11-29 13:00:29.388931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:29.760 [2024-11-29 13:00:29.388936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.760 [2024-11-29 13:00:29.393966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.760 [2024-11-29 13:00:29.393984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.760 [2024-11-29 13:00:29.394069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.760 [2024-11-29 13:00:29.394073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.760 [2024-11-29 13:00:29.463013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:29.760 [2024-11-29 13:00:29.463129] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:29.760 [2024-11-29 13:00:29.463253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:29.760 [2024-11-29 13:00:29.463465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:29.760 [2024-11-29 13:00:29.463645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:29.760 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.760 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:29.760 13:00:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:30.720 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:30.979 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:30.979 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:30.979 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:30.979 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:30.979 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:31.238 Malloc1 00:16:31.238 13:00:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:31.495 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:31.753 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:31.753 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:31.753 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:31.753 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:32.011 Malloc2 00:16:32.011 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:32.271 13:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:32.529 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:32.529 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:32.529 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1964859 00:16:32.529 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1964859 ']' 00:16:32.529 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1964859 00:16:32.529 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:32.529 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.529 13:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1964859 00:16:32.788 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.788 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.788 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1964859' 00:16:32.788 killing process with pid 1964859 00:16:32.788 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1964859 00:16:32.788 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1964859 00:16:32.788 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:32.788 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:32.788 00:16:32.788 real 0m50.854s 00:16:32.788 user 3m16.859s 00:16:32.788 sys 0m3.166s 00:16:32.788 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.788 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:32.788 ************************************ 00:16:32.788 END TEST nvmf_vfio_user 00:16:32.788 ************************************ 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.049 ************************************ 00:16:33.049 START TEST nvmf_vfio_user_nvme_compliance 00:16:33.049 ************************************ 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:33.049 * Looking for test storage... 00:16:33.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:33.049 13:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.049 13:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:33.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.049 --rc genhtml_branch_coverage=1 00:16:33.049 --rc genhtml_function_coverage=1 00:16:33.049 --rc genhtml_legend=1 00:16:33.049 --rc geninfo_all_blocks=1 00:16:33.049 --rc geninfo_unexecuted_blocks=1 00:16:33.049 00:16:33.049 ' 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:33.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.049 --rc genhtml_branch_coverage=1 00:16:33.049 --rc genhtml_function_coverage=1 00:16:33.049 --rc genhtml_legend=1 00:16:33.049 --rc geninfo_all_blocks=1 00:16:33.049 --rc geninfo_unexecuted_blocks=1 00:16:33.049 00:16:33.049 ' 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:33.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.049 --rc genhtml_branch_coverage=1 00:16:33.049 --rc genhtml_function_coverage=1 00:16:33.049 --rc 
genhtml_legend=1 00:16:33.049 --rc geninfo_all_blocks=1 00:16:33.049 --rc geninfo_unexecuted_blocks=1 00:16:33.049 00:16:33.049 ' 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:33.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.049 --rc genhtml_branch_coverage=1 00:16:33.049 --rc genhtml_function_coverage=1 00:16:33.049 --rc genhtml_legend=1 00:16:33.049 --rc geninfo_all_blocks=1 00:16:33.049 --rc geninfo_unexecuted_blocks=1 00:16:33.049 00:16:33.049 ' 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:33.049 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.050 13:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:33.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:33.050 13:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1965582 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1965582' 00:16:33.050 Process pid: 1965582 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1965582 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1965582 ']' 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.050 13:00:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:33.310 [2024-11-29 13:00:32.905958] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:16:33.310 [2024-11-29 13:00:32.906007] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.310 [2024-11-29 13:00:32.967534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.310 [2024-11-29 13:00:33.009785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.310 [2024-11-29 13:00:33.009823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.310 [2024-11-29 13:00:33.009831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.310 [2024-11-29 13:00:33.009837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.310 [2024-11-29 13:00:33.009842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:33.310 [2024-11-29 13:00:33.011259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.310 [2024-11-29 13:00:33.011355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.310 [2024-11-29 13:00:33.011357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.310 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.310 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:33.310 13:00:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.688 13:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 malloc0 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:34.688 13:00:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:34.688 00:16:34.688 00:16:34.688 CUnit - A unit testing framework for C - Version 2.1-3 00:16:34.688 http://cunit.sourceforge.net/ 00:16:34.688 00:16:34.688 00:16:34.688 Suite: nvme_compliance 00:16:34.688 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-29 13:00:34.337390] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.688 [2024-11-29 13:00:34.338754] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:34.688 [2024-11-29 13:00:34.338773] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:34.688 [2024-11-29 13:00:34.338783] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:34.688 [2024-11-29 13:00:34.340413] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.688 passed 00:16:34.688 Test: admin_identify_ctrlr_verify_fused ...[2024-11-29 13:00:34.419985] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.688 [2024-11-29 13:00:34.423001] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.688 passed 00:16:34.688 Test: admin_identify_ns ...[2024-11-29 13:00:34.502958] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.947 [2024-11-29 13:00:34.563959] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:34.947 [2024-11-29 13:00:34.571958] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:34.947 [2024-11-29 13:00:34.593067] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:34.947 passed 00:16:34.947 Test: admin_get_features_mandatory_features ...[2024-11-29 13:00:34.670117] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.947 [2024-11-29 13:00:34.673134] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.947 passed 00:16:34.947 Test: admin_get_features_optional_features ...[2024-11-29 13:00:34.752640] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.947 [2024-11-29 13:00:34.755660] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.206 passed 00:16:35.206 Test: admin_set_features_number_of_queues ...[2024-11-29 13:00:34.832425] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.206 [2024-11-29 13:00:34.937051] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.206 passed 00:16:35.206 Test: admin_get_log_page_mandatory_logs ...[2024-11-29 13:00:35.014930] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.206 [2024-11-29 13:00:35.020982] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.465 passed 00:16:35.465 Test: admin_get_log_page_with_lpo ...[2024-11-29 13:00:35.096492] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.465 [2024-11-29 13:00:35.164960] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:35.465 [2024-11-29 13:00:35.178012] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.465 passed 00:16:35.465 Test: fabric_property_get ...[2024-11-29 13:00:35.255127] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.465 [2024-11-29 13:00:35.256389] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:35.465 [2024-11-29 13:00:35.258151] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.724 passed 00:16:35.724 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-29 13:00:35.336681] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.724 [2024-11-29 13:00:35.337921] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:35.724 [2024-11-29 13:00:35.339697] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.724 passed 00:16:35.724 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-29 13:00:35.416438] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.724 [2024-11-29 13:00:35.499956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:35.724 [2024-11-29 13:00:35.515964] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:35.724 [2024-11-29 13:00:35.521036] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.984 passed 00:16:35.984 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-29 13:00:35.597958] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.984 [2024-11-29 13:00:35.600173] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:35.984 [2024-11-29 13:00:35.601992] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:35.984 passed 00:16:35.984 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-29 13:00:35.679972] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.984 [2024-11-29 13:00:35.758139] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:35.984 [2024-11-29 
13:00:35.781960] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:35.984 [2024-11-29 13:00:35.787039] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.242 passed 00:16:36.242 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-29 13:00:35.862197] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.242 [2024-11-29 13:00:35.863443] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:36.242 [2024-11-29 13:00:35.863468] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:36.242 [2024-11-29 13:00:35.867232] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.242 passed 00:16:36.242 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-29 13:00:35.942444] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.242 [2024-11-29 13:00:36.037956] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:36.242 [2024-11-29 13:00:36.045959] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:36.242 [2024-11-29 13:00:36.053955] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:36.242 [2024-11-29 13:00:36.061964] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:36.501 [2024-11-29 13:00:36.091052] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.501 passed 00:16:36.501 Test: admin_create_io_sq_verify_pc ...[2024-11-29 13:00:36.166180] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.501 [2024-11-29 13:00:36.184961] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:36.501 [2024-11-29 13:00:36.202369] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.501 passed 00:16:36.501 Test: admin_create_io_qp_max_qps ...[2024-11-29 13:00:36.274878] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:37.880 [2024-11-29 13:00:37.358956] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:38.139 [2024-11-29 13:00:37.734209] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.139 passed 00:16:38.139 Test: admin_create_io_sq_shared_cq ...[2024-11-29 13:00:37.815404] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:38.139 [2024-11-29 13:00:37.947955] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:38.399 [2024-11-29 13:00:37.985013] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:38.399 passed 00:16:38.399 00:16:38.399 Run Summary: Type Total Ran Passed Failed Inactive 00:16:38.399 suites 1 1 n/a 0 0 00:16:38.399 tests 18 18 18 0 0 00:16:38.399 asserts 360 360 360 0 n/a 00:16:38.399 00:16:38.399 Elapsed time = 1.499 seconds 00:16:38.399 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1965582 00:16:38.399 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1965582 ']' 00:16:38.399 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1965582 00:16:38.399 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:38.399 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.399 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1965582 00:16:38.399 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.399 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.399 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1965582' 00:16:38.400 killing process with pid 1965582 00:16:38.400 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1965582 00:16:38.400 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1965582 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:38.659 00:16:38.659 real 0m5.614s 00:16:38.659 user 0m15.706s 00:16:38.659 sys 0m0.500s 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:38.659 ************************************ 00:16:38.659 END TEST nvmf_vfio_user_nvme_compliance 00:16:38.659 ************************************ 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.659 ************************************ 00:16:38.659 START TEST nvmf_vfio_user_fuzz 00:16:38.659 ************************************ 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:38.659 * Looking for test storage... 00:16:38.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:38.659 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.660 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:38.660 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.920 13:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.920 --rc genhtml_branch_coverage=1 00:16:38.920 --rc genhtml_function_coverage=1 00:16:38.920 --rc genhtml_legend=1 00:16:38.920 --rc geninfo_all_blocks=1 00:16:38.920 --rc geninfo_unexecuted_blocks=1 00:16:38.920 00:16:38.920 ' 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.920 --rc genhtml_branch_coverage=1 00:16:38.920 --rc genhtml_function_coverage=1 00:16:38.920 --rc genhtml_legend=1 00:16:38.920 --rc geninfo_all_blocks=1 00:16:38.920 --rc geninfo_unexecuted_blocks=1 00:16:38.920 00:16:38.920 ' 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.920 --rc genhtml_branch_coverage=1 00:16:38.920 --rc genhtml_function_coverage=1 00:16:38.920 --rc genhtml_legend=1 00:16:38.920 --rc geninfo_all_blocks=1 00:16:38.920 --rc geninfo_unexecuted_blocks=1 00:16:38.920 00:16:38.920 ' 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.920 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:38.920 --rc genhtml_branch_coverage=1 00:16:38.920 --rc genhtml_function_coverage=1 00:16:38.920 --rc genhtml_legend=1 00:16:38.920 --rc geninfo_all_blocks=1 00:16:38.920 --rc geninfo_unexecuted_blocks=1 00:16:38.920 00:16:38.920 ' 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.920 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.920 13:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1966600 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1966600' 00:16:38.921 Process pid: 1966600 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1966600 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1966600 ']' 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.921 13:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.921 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:39.180 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.180 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:39.180 13:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:40.119 malloc0 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:40.119 13:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:12.209 Fuzzing completed. Shutting down the fuzz application 00:17:12.209 00:17:12.209 Dumping successful admin opcodes: 00:17:12.209 9, 10, 00:17:12.209 Dumping successful io opcodes: 00:17:12.209 0, 00:17:12.209 NS: 0x20000081ef00 I/O qp, Total commands completed: 1109197, total successful commands: 4368, random_seed: 2093638528 00:17:12.209 NS: 0x20000081ef00 admin qp, Total commands completed: 276000, total successful commands: 64, random_seed: 2102122496 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1966600 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1966600 ']' 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1966600 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1966600 00:17:12.209 13:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1966600' 00:17:12.209 killing process with pid 1966600 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1966600 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1966600 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:12.209 00:17:12.209 real 0m32.183s 00:17:12.209 user 0m33.882s 00:17:12.209 sys 0m26.850s 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:12.209 ************************************ 00:17:12.209 END TEST nvmf_vfio_user_fuzz 00:17:12.209 ************************************ 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.209 ************************************ 00:17:12.209 START TEST nvmf_auth_target 00:17:12.209 ************************************ 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:12.209 * Looking for test storage... 00:17:12.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.209 13:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.209 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.210 13:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.210 --rc genhtml_branch_coverage=1 00:17:12.210 --rc genhtml_function_coverage=1 00:17:12.210 --rc genhtml_legend=1 00:17:12.210 --rc geninfo_all_blocks=1 00:17:12.210 --rc geninfo_unexecuted_blocks=1 00:17:12.210 00:17:12.210 ' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.210 --rc genhtml_branch_coverage=1 00:17:12.210 --rc genhtml_function_coverage=1 00:17:12.210 --rc genhtml_legend=1 00:17:12.210 --rc geninfo_all_blocks=1 00:17:12.210 --rc geninfo_unexecuted_blocks=1 00:17:12.210 00:17:12.210 ' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.210 --rc genhtml_branch_coverage=1 00:17:12.210 --rc genhtml_function_coverage=1 00:17:12.210 --rc genhtml_legend=1 00:17:12.210 --rc geninfo_all_blocks=1 00:17:12.210 --rc geninfo_unexecuted_blocks=1 00:17:12.210 00:17:12.210 ' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.210 --rc genhtml_branch_coverage=1 00:17:12.210 --rc genhtml_function_coverage=1 00:17:12.210 --rc genhtml_legend=1 00:17:12.210 
--rc geninfo_all_blocks=1 00:17:12.210 --rc geninfo_unexecuted_blocks=1 00:17:12.210 00:17:12.210 ' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.210 
13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:12.210 13:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:12.210 13:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:12.210 13:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:16.405 13:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:16.405 13:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:16.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:16.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.405 
13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:16.405 Found net devices under 0000:86:00.0: cvl_0_0 00:17:16.405 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:16.406 
13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:16.406 Found net devices under 0000:86:00.1: cvl_0_1 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:16.406 13:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:16.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:17:16.406 00:17:16.406 --- 10.0.0.2 ping statistics --- 00:17:16.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.406 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:17:16.406 00:17:16.406 --- 10.0.0.1 ping statistics --- 00:17:16.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.406 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1974686 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1974686 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1974686 ']' 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.406 13:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.406 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.406 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:16.406 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:16.406 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:16.406 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1974755 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b5c5cfe138b3888bec0bb5c07e53f08babb4afe65471d795 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.UU4 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b5c5cfe138b3888bec0bb5c07e53f08babb4afe65471d795 0 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b5c5cfe138b3888bec0bb5c07e53f08babb4afe65471d795 0 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b5c5cfe138b3888bec0bb5c07e53f08babb4afe65471d795 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.UU4 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo 
/tmp/spdk.key-null.UU4 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.UU4 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=72589299af0cfa2050636a50622d5d546cb4f199c46ee7273b3d73ed5509c621 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.F6m 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 72589299af0cfa2050636a50622d5d546cb4f199c46ee7273b3d73ed5509c621 3 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 72589299af0cfa2050636a50622d5d546cb4f199c46ee7273b3d73ed5509c621 3 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:16.666 13:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=72589299af0cfa2050636a50622d5d546cb4f199c46ee7273b3d73ed5509c621 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.F6m 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.F6m 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.F6m 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ce80918ddb20cb0c21d8f360a1302d7b 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TLX 00:17:16.666 13:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ce80918ddb20cb0c21d8f360a1302d7b 1 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ce80918ddb20cb0c21d8f360a1302d7b 1 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ce80918ddb20cb0c21d8f360a1302d7b 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TLX 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TLX 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.TLX 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f3c59fd20194facb6d1996412cf094c3396b95472bfae060 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RhJ 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f3c59fd20194facb6d1996412cf094c3396b95472bfae060 2 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f3c59fd20194facb6d1996412cf094c3396b95472bfae060 2 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f3c59fd20194facb6d1996412cf094c3396b95472bfae060 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RhJ 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RhJ 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.RhJ 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:16.666 13:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=09cc6243294028ff7faf414d9c63b72c165ede9bffeff273 00:17:16.666 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0Qk 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 09cc6243294028ff7faf414d9c63b72c165ede9bffeff273 2 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 09cc6243294028ff7faf414d9c63b72c165ede9bffeff273 2 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=09cc6243294028ff7faf414d9c63b72c165ede9bffeff273 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0Qk 
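The `gen_dhchap_key` / `format_dhchap_key` sequence traced above (draw N random bytes with `xxd -p -c0 /dev/urandom`, then run a small `python -` heredoc to wrap the hex string into a `DHHC-1:<id>:<base64>:` secret) can be sketched as below. This is a reconstruction from the trace, not SPDK's actual `nvmf/common.sh` source; the CRC placement matches the secrets visible later in the log (e.g. the trailing `O4K/0w==` bytes), but treat it as an illustration only.

```python
import base64
import os
import zlib

def gen_dhchap_key(digest_id: int, hex_len: int) -> str:
    """Sketch of the traced key-generation steps:

    - draw hex_len/2 random bytes and hex-encode them
      (the `xxd -p -c0 -l N /dev/urandom` step),
    - append a little-endian CRC32 of that ASCII hex string,
    - base64 the result and wrap it as DHHC-1:<digest_id>:<b64>:
      (the `python -` formatting step, digest_id 0=null, 1=sha256,
      2=sha384, 3=sha512 as in the digests map in the trace).
    """
    key = os.urandom(hex_len // 2).hex()
    crc = zlib.crc32(key.encode()).to_bytes(4, "little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    return "DHHC-1:{:02x}:{}:".format(digest_id, b64)

# Same shape as `gen_dhchap_key sha512 64` in the trace.
secret = gen_dhchap_key(3, 64)
print(secret)
```

The appended CRC32 lets a consumer validate the secret after base64-decoding it: strip the last four bytes, recompute the checksum over the rest, and compare.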
00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0Qk 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.0Qk 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0b6b9147e01b5f78bcc6939a23c002a0 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.XEK 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0b6b9147e01b5f78bcc6939a23c002a0 1 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0b6b9147e01b5f78bcc6939a23c002a0 1 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:16.925 13:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0b6b9147e01b5f78bcc6939a23c002a0 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:16.925 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.XEK 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.XEK 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.XEK 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=317631c1f0e12f68f12acdb94ed874d11a96540c9a5741edc7bb7474ccba2042 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zgI 00:17:16.926 13:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 317631c1f0e12f68f12acdb94ed874d11a96540c9a5741edc7bb7474ccba2042 3 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 317631c1f0e12f68f12acdb94ed874d11a96540c9a5741edc7bb7474ccba2042 3 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=317631c1f0e12f68f12acdb94ed874d11a96540c9a5741edc7bb7474ccba2042 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zgI 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zgI 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.zgI 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1974686 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1974686 ']' 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.926 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.184 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.184 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:17.184 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1974755 /var/tmp/host.sock 00:17:17.184 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1974755 ']' 00:17:17.184 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:17.184 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.184 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:17.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
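Once both RPC sockets are up, each `connect_authenticate` pass in this log verifies the negotiated auth parameters by fetching the subsystem's qpairs and probing them with `jq -r '.[0].auth.digest'` and friends. The same check can be sketched in Python against a qpair listing shaped like the `nvmf_subsystem_get_qpairs` output captured below; the inlined JSON is a trimmed, hypothetical stand-in for the live RPC response.

```python
import json

# Trimmed stand-in for the nvmf_subsystem_get_qpairs output seen in
# the trace; only the fields the auth checks actually read are kept.
qpairs_json = """
[
  {
    "cntlid": 1,
    "qid": 0,
    "state": "enabled",
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}
  }
]
"""

def auth_params(raw: str) -> tuple[str, str, str]:
    """Equivalent of the jq -r '.[0].auth.digest' / '.dhgroup' /
    '.state' probes the test runs on the first qpair."""
    auth = json.loads(raw)[0]["auth"]
    return auth["digest"], auth["dhgroup"], auth["state"]

digest, dhgroup, state = auth_params(qpairs_json)
# The test passes only when all three match what was configured via
# bdev_nvme_set_options --dhchap-digests / --dhchap-dhgroups.
assert (digest, dhgroup, state) == ("sha256", "null", "completed")
```

An `auth.state` of `completed` is what distinguishes a successfully DH-HMAC-CHAP-authenticated qpair from one that merely connected.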
00:17:17.184 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.184 13:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UU4 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UU4 00:17:17.444 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UU4 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.F6m ]] 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F6m 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F6m 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F6m 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.TLX 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.TLX 00:17:17.703 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.TLX 00:17:17.962 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.RhJ ]] 00:17:17.962 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RhJ 00:17:17.962 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.962 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.962 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.962 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RhJ 00:17:17.963 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RhJ 00:17:18.222 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:18.222 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0Qk 00:17:18.222 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.222 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.222 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.222 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0Qk 00:17:18.222 13:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0Qk 00:17:18.482 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.XEK ]] 00:17:18.482 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XEK 00:17:18.482 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.482 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.482 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.482 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XEK 00:17:18.482 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XEK 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zgI 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.zgI 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.zgI 00:17:18.742 13:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.742 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.001 13:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.001 13:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.260 00:17:19.260 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.260 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.260 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.520 { 00:17:19.520 "cntlid": 1, 00:17:19.520 "qid": 0, 00:17:19.520 "state": "enabled", 00:17:19.520 "thread": "nvmf_tgt_poll_group_000", 00:17:19.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.520 "listen_address": { 00:17:19.520 "trtype": "TCP", 00:17:19.520 "adrfam": "IPv4", 00:17:19.520 "traddr": "10.0.0.2", 00:17:19.520 "trsvcid": "4420" 00:17:19.520 }, 00:17:19.520 "peer_address": { 00:17:19.520 "trtype": "TCP", 00:17:19.520 "adrfam": "IPv4", 00:17:19.520 "traddr": "10.0.0.1", 00:17:19.520 "trsvcid": "38050" 00:17:19.520 }, 00:17:19.520 "auth": { 00:17:19.520 "state": "completed", 00:17:19.520 "digest": "sha256", 00:17:19.520 "dhgroup": "null" 00:17:19.520 } 00:17:19.520 } 00:17:19.520 ]' 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:19.520 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.780 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.780 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.780 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.780 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:19.780 13:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:20.349 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.349 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.349 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.349 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.349 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.349 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.349 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:20.349 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:20.608 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:20.608 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.608 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.608 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:20.608 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.609 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.609 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.609 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.609 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.609 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.609 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.609 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.609 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.868 00:17:20.868 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.868 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.868 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.127 { 00:17:21.127 "cntlid": 3, 00:17:21.127 "qid": 0, 00:17:21.127 "state": "enabled", 00:17:21.127 "thread": "nvmf_tgt_poll_group_000", 00:17:21.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:21.127 "listen_address": { 00:17:21.127 "trtype": "TCP", 00:17:21.127 "adrfam": "IPv4", 00:17:21.127 
"traddr": "10.0.0.2", 00:17:21.127 "trsvcid": "4420" 00:17:21.127 }, 00:17:21.127 "peer_address": { 00:17:21.127 "trtype": "TCP", 00:17:21.127 "adrfam": "IPv4", 00:17:21.127 "traddr": "10.0.0.1", 00:17:21.127 "trsvcid": "38074" 00:17:21.127 }, 00:17:21.127 "auth": { 00:17:21.127 "state": "completed", 00:17:21.127 "digest": "sha256", 00:17:21.127 "dhgroup": "null" 00:17:21.127 } 00:17:21.127 } 00:17:21.127 ]' 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:21.127 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.386 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.386 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.386 13:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.386 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:21.386 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:21.954 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.954 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.954 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.954 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.954 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.954 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.954 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.954 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.214 13:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.473 00:17:22.473 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.473 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.473 
13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.732 { 00:17:22.732 "cntlid": 5, 00:17:22.732 "qid": 0, 00:17:22.732 "state": "enabled", 00:17:22.732 "thread": "nvmf_tgt_poll_group_000", 00:17:22.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:22.732 "listen_address": { 00:17:22.732 "trtype": "TCP", 00:17:22.732 "adrfam": "IPv4", 00:17:22.732 "traddr": "10.0.0.2", 00:17:22.732 "trsvcid": "4420" 00:17:22.732 }, 00:17:22.732 "peer_address": { 00:17:22.732 "trtype": "TCP", 00:17:22.732 "adrfam": "IPv4", 00:17:22.732 "traddr": "10.0.0.1", 00:17:22.732 "trsvcid": "38114" 00:17:22.732 }, 00:17:22.732 "auth": { 00:17:22.732 "state": "completed", 00:17:22.732 "digest": "sha256", 00:17:22.732 "dhgroup": "null" 00:17:22.732 } 00:17:22.732 } 00:17:22.732 ]' 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.732 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.992 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:22.992 13:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:23.560 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.560 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.560 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.560 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.560 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.560 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.560 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:23.560 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:23.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:23.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:23.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.819 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.820 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.820 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:23.820 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.820 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:23.820 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.820 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.820 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.820 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.078 00:17:24.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.078 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.337 13:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.337 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.337 
13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.337 { 00:17:24.337 "cntlid": 7, 00:17:24.337 "qid": 0, 00:17:24.337 "state": "enabled", 00:17:24.337 "thread": "nvmf_tgt_poll_group_000", 00:17:24.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:24.337 "listen_address": { 00:17:24.337 "trtype": "TCP", 00:17:24.337 "adrfam": "IPv4", 00:17:24.337 "traddr": "10.0.0.2", 00:17:24.337 "trsvcid": "4420" 00:17:24.337 }, 00:17:24.337 "peer_address": { 00:17:24.337 "trtype": "TCP", 00:17:24.337 "adrfam": "IPv4", 00:17:24.337 "traddr": "10.0.0.1", 00:17:24.337 "trsvcid": "41140" 00:17:24.337 }, 00:17:24.337 "auth": { 00:17:24.337 "state": "completed", 00:17:24.337 "digest": "sha256", 00:17:24.337 "dhgroup": "null" 00:17:24.337 } 00:17:24.337 } 00:17:24.337 ]' 00:17:24.337 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.337 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.337 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.337 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.337 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.338 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.338 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.338 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.596 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:24.596 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:25.164 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.164 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.164 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.164 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.164 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.164 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.164 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.164 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.164 13:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.428 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.686 00:17:25.686 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.686 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.686 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.945 { 00:17:25.945 "cntlid": 9, 00:17:25.945 "qid": 0, 00:17:25.945 "state": "enabled", 00:17:25.945 "thread": "nvmf_tgt_poll_group_000", 00:17:25.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.945 "listen_address": { 00:17:25.945 "trtype": "TCP", 00:17:25.945 "adrfam": "IPv4", 00:17:25.945 "traddr": "10.0.0.2", 00:17:25.945 "trsvcid": "4420" 00:17:25.945 }, 00:17:25.945 "peer_address": { 00:17:25.945 "trtype": "TCP", 00:17:25.945 "adrfam": "IPv4", 00:17:25.945 "traddr": "10.0.0.1", 00:17:25.945 "trsvcid": "41176" 00:17:25.945 
}, 00:17:25.945 "auth": { 00:17:25.945 "state": "completed", 00:17:25.945 "digest": "sha256", 00:17:25.945 "dhgroup": "ffdhe2048" 00:17:25.945 } 00:17:25.945 } 00:17:25.945 ]' 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.945 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.204 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:26.204 13:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret 
DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:26.772 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.772 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.772 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.772 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.772 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.772 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.772 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.772 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.031 13:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.290 00:17:27.290 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.290 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.290 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.549 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.549 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.549 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.549 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.549 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.549 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.549 { 00:17:27.549 "cntlid": 11, 00:17:27.549 "qid": 0, 00:17:27.549 "state": "enabled", 00:17:27.549 "thread": "nvmf_tgt_poll_group_000", 00:17:27.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:27.549 "listen_address": { 00:17:27.549 "trtype": "TCP", 00:17:27.549 "adrfam": "IPv4", 00:17:27.549 "traddr": "10.0.0.2", 00:17:27.549 "trsvcid": "4420" 00:17:27.549 }, 00:17:27.549 "peer_address": { 00:17:27.549 "trtype": "TCP", 00:17:27.549 "adrfam": "IPv4", 00:17:27.549 "traddr": "10.0.0.1", 00:17:27.549 "trsvcid": "41200" 00:17:27.549 }, 00:17:27.549 "auth": { 00:17:27.549 "state": "completed", 00:17:27.549 "digest": "sha256", 00:17:27.549 "dhgroup": "ffdhe2048" 00:17:27.549 } 00:17:27.549 } 00:17:27.549 ]' 00:17:27.549 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.549 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.549 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.549 13:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.550 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.550 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.550 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.550 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.809 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:27.809 13:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:28.377 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.377 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.377 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:28.377 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.377 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.377 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.377 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.377 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.636 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.895 00:17:28.895 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.895 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.895 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.154 13:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.154 { 00:17:29.154 "cntlid": 13, 00:17:29.154 "qid": 0, 00:17:29.154 "state": "enabled", 00:17:29.154 "thread": "nvmf_tgt_poll_group_000", 00:17:29.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:29.154 "listen_address": { 00:17:29.154 "trtype": "TCP", 00:17:29.154 "adrfam": "IPv4", 00:17:29.154 "traddr": "10.0.0.2", 00:17:29.154 "trsvcid": "4420" 00:17:29.154 }, 00:17:29.154 "peer_address": { 00:17:29.154 "trtype": "TCP", 00:17:29.154 "adrfam": "IPv4", 00:17:29.154 "traddr": "10.0.0.1", 00:17:29.154 "trsvcid": "41230" 00:17:29.154 }, 00:17:29.154 "auth": { 00:17:29.154 "state": "completed", 00:17:29.154 "digest": "sha256", 00:17:29.154 "dhgroup": "ffdhe2048" 00:17:29.154 } 00:17:29.154 } 00:17:29.154 ]' 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.154 13:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.414 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:29.414 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:29.984 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.984 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.984 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.984 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.984 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.984 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.984 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.984 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.243 13:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.502 00:17:30.502 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.502 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.502 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.762 { 00:17:30.762 "cntlid": 15, 00:17:30.762 "qid": 0, 00:17:30.762 "state": "enabled", 00:17:30.762 "thread": "nvmf_tgt_poll_group_000", 00:17:30.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.762 "listen_address": { 00:17:30.762 "trtype": "TCP", 00:17:30.762 "adrfam": "IPv4", 00:17:30.762 "traddr": "10.0.0.2", 00:17:30.762 "trsvcid": "4420" 00:17:30.762 }, 00:17:30.762 "peer_address": { 00:17:30.762 "trtype": "TCP", 00:17:30.762 "adrfam": "IPv4", 00:17:30.762 "traddr": "10.0.0.1", 
00:17:30.762 "trsvcid": "41264" 00:17:30.762 }, 00:17:30.762 "auth": { 00:17:30.762 "state": "completed", 00:17:30.762 "digest": "sha256", 00:17:30.762 "dhgroup": "ffdhe2048" 00:17:30.762 } 00:17:30.762 } 00:17:30.762 ]' 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.762 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.022 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:31.022 13:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:31.591 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.591 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.591 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.591 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.591 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.591 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.591 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.591 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.591 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.850 13:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.850 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.109 00:17:32.109 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.109 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.109 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.369 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.369 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.369 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.369 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.369 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.369 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.369 { 00:17:32.369 "cntlid": 17, 00:17:32.369 "qid": 0, 00:17:32.369 "state": "enabled", 00:17:32.369 "thread": "nvmf_tgt_poll_group_000", 00:17:32.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:32.369 "listen_address": { 00:17:32.369 "trtype": "TCP", 00:17:32.369 "adrfam": "IPv4", 00:17:32.369 "traddr": "10.0.0.2", 00:17:32.369 "trsvcid": "4420" 00:17:32.369 }, 00:17:32.369 "peer_address": { 00:17:32.369 "trtype": "TCP", 00:17:32.369 "adrfam": "IPv4", 00:17:32.369 "traddr": "10.0.0.1", 00:17:32.369 "trsvcid": "41294" 00:17:32.369 }, 00:17:32.369 "auth": { 00:17:32.369 "state": "completed", 00:17:32.369 "digest": "sha256", 00:17:32.369 "dhgroup": "ffdhe3072" 00:17:32.369 } 00:17:32.369 } 00:17:32.369 ]' 00:17:32.369 13:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.369 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.369 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.369 13:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.369 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.369 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.369 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.369 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.629 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:32.629 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:33.197 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.197 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.197 13:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.197 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.197 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.197 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.197 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:33.197 13:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.456 13:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.456 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.716 00:17:33.716 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.716 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.716 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.975 { 00:17:33.975 "cntlid": 19, 00:17:33.975 "qid": 0, 00:17:33.975 "state": "enabled", 00:17:33.975 "thread": "nvmf_tgt_poll_group_000", 00:17:33.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:33.975 "listen_address": { 00:17:33.975 "trtype": "TCP", 00:17:33.975 "adrfam": "IPv4", 00:17:33.975 "traddr": "10.0.0.2", 00:17:33.975 "trsvcid": "4420" 00:17:33.975 }, 00:17:33.975 "peer_address": { 00:17:33.975 "trtype": "TCP", 00:17:33.975 "adrfam": "IPv4", 00:17:33.975 "traddr": "10.0.0.1", 00:17:33.975 "trsvcid": "41328" 00:17:33.975 }, 00:17:33.975 "auth": { 00:17:33.975 "state": "completed", 00:17:33.975 "digest": "sha256", 00:17:33.975 "dhgroup": "ffdhe3072" 00:17:33.975 } 00:17:33.975 } 00:17:33.975 ]' 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.975 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.234 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:34.234 13:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:34.801 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.801 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.801 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.801 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.801 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.801 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.801 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.801 13:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.061 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.319 00:17:35.319 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.319 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.319 13:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.319 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.319 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.319 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.319 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.579 { 00:17:35.579 "cntlid": 21, 00:17:35.579 "qid": 0, 00:17:35.579 "state": "enabled", 00:17:35.579 "thread": "nvmf_tgt_poll_group_000", 00:17:35.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:35.579 "listen_address": { 00:17:35.579 "trtype": "TCP", 00:17:35.579 "adrfam": "IPv4", 00:17:35.579 "traddr": "10.0.0.2", 00:17:35.579 
"trsvcid": "4420" 00:17:35.579 }, 00:17:35.579 "peer_address": { 00:17:35.579 "trtype": "TCP", 00:17:35.579 "adrfam": "IPv4", 00:17:35.579 "traddr": "10.0.0.1", 00:17:35.579 "trsvcid": "50440" 00:17:35.579 }, 00:17:35.579 "auth": { 00:17:35.579 "state": "completed", 00:17:35.579 "digest": "sha256", 00:17:35.579 "dhgroup": "ffdhe3072" 00:17:35.579 } 00:17:35.579 } 00:17:35.579 ]' 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.579 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.837 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:35.837 13:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.403 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.688 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.688 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.688 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.688 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.688 00:17:36.688 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.688 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.688 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.946 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.946 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.946 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.946 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.946 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.946 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.947 { 00:17:36.947 "cntlid": 23, 00:17:36.947 "qid": 0, 00:17:36.947 "state": "enabled", 00:17:36.947 "thread": "nvmf_tgt_poll_group_000", 00:17:36.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:36.947 "listen_address": { 00:17:36.947 "trtype": "TCP", 00:17:36.947 "adrfam": "IPv4", 00:17:36.947 "traddr": "10.0.0.2", 00:17:36.947 "trsvcid": "4420" 00:17:36.947 }, 00:17:36.947 "peer_address": { 00:17:36.947 "trtype": "TCP", 00:17:36.947 "adrfam": "IPv4", 00:17:36.947 "traddr": "10.0.0.1", 00:17:36.947 "trsvcid": "50460" 00:17:36.947 }, 00:17:36.947 "auth": { 00:17:36.947 "state": "completed", 00:17:36.947 "digest": "sha256", 00:17:36.947 "dhgroup": "ffdhe3072" 00:17:36.947 } 00:17:36.947 } 00:17:36.947 ]' 00:17:36.947 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.947 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.947 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.205 13:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.205 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.205 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.205 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.205 13:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.205 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:37.205 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:37.772 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.772 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.772 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.032 13:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.290 00:17:38.290 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.290 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.290 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.548 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.548 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.548 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.548 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.548 13:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.548 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.548 { 00:17:38.548 "cntlid": 25, 00:17:38.548 "qid": 0, 00:17:38.548 "state": "enabled", 00:17:38.548 "thread": "nvmf_tgt_poll_group_000", 00:17:38.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.548 "listen_address": { 00:17:38.548 "trtype": "TCP", 00:17:38.548 "adrfam": "IPv4", 00:17:38.548 "traddr": "10.0.0.2", 00:17:38.548 "trsvcid": "4420" 00:17:38.548 }, 00:17:38.548 "peer_address": { 00:17:38.548 "trtype": "TCP", 00:17:38.548 "adrfam": "IPv4", 00:17:38.548 "traddr": "10.0.0.1", 00:17:38.548 "trsvcid": "50484" 00:17:38.548 }, 00:17:38.548 "auth": { 00:17:38.548 "state": "completed", 00:17:38.548 "digest": "sha256", 00:17:38.548 "dhgroup": "ffdhe4096" 00:17:38.548 } 00:17:38.548 } 00:17:38.548 ]' 00:17:38.548 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.548 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.548 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.548 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.806 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.806 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.806 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.806 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.806 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:38.806 13:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:39.374 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.374 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.374 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.374 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.634 13:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.634 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.893 00:17:39.893 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.893 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.893 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.152 { 00:17:40.152 "cntlid": 27, 00:17:40.152 "qid": 0, 00:17:40.152 "state": "enabled", 00:17:40.152 "thread": "nvmf_tgt_poll_group_000", 00:17:40.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:40.152 "listen_address": { 00:17:40.152 "trtype": "TCP", 00:17:40.152 "adrfam": "IPv4", 00:17:40.152 "traddr": "10.0.0.2", 00:17:40.152 
"trsvcid": "4420" 00:17:40.152 }, 00:17:40.152 "peer_address": { 00:17:40.152 "trtype": "TCP", 00:17:40.152 "adrfam": "IPv4", 00:17:40.152 "traddr": "10.0.0.1", 00:17:40.152 "trsvcid": "50514" 00:17:40.152 }, 00:17:40.152 "auth": { 00:17:40.152 "state": "completed", 00:17:40.152 "digest": "sha256", 00:17:40.152 "dhgroup": "ffdhe4096" 00:17:40.152 } 00:17:40.152 } 00:17:40.152 ]' 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.152 13:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.412 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.412 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.412 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.412 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:40.412 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:40.980 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.980 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.980 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.980 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.980 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.980 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.980 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:40.980 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.239 13:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.498 00:17:41.498 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.498 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.498 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.758 { 00:17:41.758 "cntlid": 29, 00:17:41.758 "qid": 0, 00:17:41.758 "state": "enabled", 00:17:41.758 "thread": "nvmf_tgt_poll_group_000", 00:17:41.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.758 "listen_address": { 00:17:41.758 "trtype": "TCP", 00:17:41.758 "adrfam": "IPv4", 00:17:41.758 "traddr": "10.0.0.2", 00:17:41.758 "trsvcid": "4420" 00:17:41.758 }, 00:17:41.758 "peer_address": { 00:17:41.758 "trtype": "TCP", 00:17:41.758 "adrfam": "IPv4", 00:17:41.758 "traddr": "10.0.0.1", 00:17:41.758 "trsvcid": "50520" 00:17:41.758 }, 00:17:41.758 "auth": { 00:17:41.758 "state": "completed", 00:17:41.758 "digest": "sha256", 00:17:41.758 "dhgroup": "ffdhe4096" 00:17:41.758 } 00:17:41.758 } 00:17:41.758 ]' 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.758 13:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.758 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.017 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.017 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.017 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.017 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:42.017 13:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:42.586 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.586 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.586 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.586 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.586 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.586 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.586 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.586 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.845 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.846 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.105 00:17:43.105 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.105 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.105 13:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.364 { 00:17:43.364 "cntlid": 31, 00:17:43.364 "qid": 0, 00:17:43.364 "state": "enabled", 00:17:43.364 "thread": "nvmf_tgt_poll_group_000", 00:17:43.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:43.364 "listen_address": { 00:17:43.364 "trtype": "TCP", 00:17:43.364 "adrfam": "IPv4", 00:17:43.364 "traddr": "10.0.0.2", 00:17:43.364 "trsvcid": "4420" 00:17:43.364 }, 00:17:43.364 "peer_address": { 00:17:43.364 "trtype": "TCP", 00:17:43.364 "adrfam": "IPv4", 00:17:43.364 "traddr": "10.0.0.1", 00:17:43.364 "trsvcid": "50544" 00:17:43.364 }, 00:17:43.364 "auth": { 00:17:43.364 "state": "completed", 00:17:43.364 "digest": "sha256", 00:17:43.364 "dhgroup": "ffdhe4096" 00:17:43.364 } 00:17:43.364 } 00:17:43.364 ]' 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.364 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.623 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:43.623 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:44.192 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.192 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.192 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.192 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.192 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.192 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.192 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.192 13:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.192 13:01:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.451 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.711 00:17:44.711 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.711 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.711 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.971 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.971 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.971 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.971 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.971 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.971 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.971 { 00:17:44.971 "cntlid": 33, 00:17:44.971 "qid": 0, 00:17:44.971 "state": "enabled", 00:17:44.971 "thread": "nvmf_tgt_poll_group_000", 00:17:44.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.971 "listen_address": { 00:17:44.971 "trtype": "TCP", 00:17:44.971 "adrfam": "IPv4", 00:17:44.971 "traddr": "10.0.0.2", 00:17:44.971 
"trsvcid": "4420" 00:17:44.971 }, 00:17:44.971 "peer_address": { 00:17:44.971 "trtype": "TCP", 00:17:44.971 "adrfam": "IPv4", 00:17:44.971 "traddr": "10.0.0.1", 00:17:44.971 "trsvcid": "44076" 00:17:44.971 }, 00:17:44.971 "auth": { 00:17:44.971 "state": "completed", 00:17:44.971 "digest": "sha256", 00:17:44.971 "dhgroup": "ffdhe6144" 00:17:44.971 } 00:17:44.971 } 00:17:44.971 ]' 00:17:44.971 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.971 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.971 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.230 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.230 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.230 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.230 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.230 13:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.488 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:45.488 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.056 13:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:46.056 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.057 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.057 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.057 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.057 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.057 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.057 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.057 13:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.625 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.625 { 00:17:46.625 "cntlid": 35, 00:17:46.625 "qid": 0, 00:17:46.625 "state": "enabled", 00:17:46.625 "thread": "nvmf_tgt_poll_group_000", 00:17:46.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:46.625 "listen_address": { 00:17:46.625 "trtype": "TCP", 00:17:46.625 "adrfam": "IPv4", 00:17:46.625 "traddr": "10.0.0.2", 00:17:46.625 "trsvcid": "4420" 00:17:46.625 }, 00:17:46.625 "peer_address": { 00:17:46.625 "trtype": "TCP", 00:17:46.625 "adrfam": "IPv4", 00:17:46.625 "traddr": "10.0.0.1", 00:17:46.625 "trsvcid": "44108" 00:17:46.625 }, 00:17:46.625 "auth": { 00:17:46.625 "state": "completed", 00:17:46.625 "digest": "sha256", 00:17:46.625 "dhgroup": "ffdhe6144" 00:17:46.625 } 00:17:46.625 } 00:17:46.625 ]' 00:17:46.625 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.884 13:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.884 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.884 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.884 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.884 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.884 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.884 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.143 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:47.143 13:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.711 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.278 00:17:48.278 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.278 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.278 13:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.278 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.278 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.278 13:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.278 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.278 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.278 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.278 { 00:17:48.278 "cntlid": 37, 00:17:48.278 "qid": 0, 00:17:48.278 "state": "enabled", 00:17:48.278 "thread": "nvmf_tgt_poll_group_000", 00:17:48.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:48.278 "listen_address": { 00:17:48.278 "trtype": "TCP", 00:17:48.278 "adrfam": "IPv4", 00:17:48.278 "traddr": "10.0.0.2", 00:17:48.278 "trsvcid": "4420" 00:17:48.278 }, 00:17:48.278 "peer_address": { 00:17:48.278 "trtype": "TCP", 00:17:48.278 "adrfam": "IPv4", 00:17:48.278 "traddr": "10.0.0.1", 00:17:48.278 "trsvcid": "44144" 00:17:48.278 }, 00:17:48.278 "auth": { 00:17:48.278 "state": "completed", 00:17:48.278 "digest": "sha256", 00:17:48.278 "dhgroup": "ffdhe6144" 00:17:48.278 } 00:17:48.278 } 00:17:48.278 ]' 00:17:48.278 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.537 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.537 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.537 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.537 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.537 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.537 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.537 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.796 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:48.796 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:49.363 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.363 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.363 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.363 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.363 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.363 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.363 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.363 13:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.623 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.883 00:17:49.883 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.883 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.883 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.143 { 00:17:50.143 "cntlid": 39, 00:17:50.143 "qid": 0, 00:17:50.143 "state": "enabled", 00:17:50.143 "thread": "nvmf_tgt_poll_group_000", 00:17:50.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:50.143 "listen_address": { 00:17:50.143 "trtype": "TCP", 00:17:50.143 "adrfam": 
"IPv4", 00:17:50.143 "traddr": "10.0.0.2", 00:17:50.143 "trsvcid": "4420" 00:17:50.143 }, 00:17:50.143 "peer_address": { 00:17:50.143 "trtype": "TCP", 00:17:50.143 "adrfam": "IPv4", 00:17:50.143 "traddr": "10.0.0.1", 00:17:50.143 "trsvcid": "44170" 00:17:50.143 }, 00:17:50.143 "auth": { 00:17:50.143 "state": "completed", 00:17:50.143 "digest": "sha256", 00:17:50.143 "dhgroup": "ffdhe6144" 00:17:50.143 } 00:17:50.143 } 00:17:50.143 ]' 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.143 13:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.401 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:50.401 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:50.969 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.969 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.969 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.969 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.969 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.969 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.969 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.969 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.969 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:51.229 
13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.229 13:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.799 00:17:51.799 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.799 13:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.799 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.799 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.799 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.799 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.799 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.059 { 00:17:52.059 "cntlid": 41, 00:17:52.059 "qid": 0, 00:17:52.059 "state": "enabled", 00:17:52.059 "thread": "nvmf_tgt_poll_group_000", 00:17:52.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.059 "listen_address": { 00:17:52.059 "trtype": "TCP", 00:17:52.059 "adrfam": "IPv4", 00:17:52.059 "traddr": "10.0.0.2", 00:17:52.059 "trsvcid": "4420" 00:17:52.059 }, 00:17:52.059 "peer_address": { 00:17:52.059 "trtype": "TCP", 00:17:52.059 "adrfam": "IPv4", 00:17:52.059 "traddr": "10.0.0.1", 00:17:52.059 "trsvcid": "44190" 00:17:52.059 }, 00:17:52.059 "auth": { 00:17:52.059 "state": "completed", 00:17:52.059 "digest": "sha256", 00:17:52.059 "dhgroup": "ffdhe8192" 00:17:52.059 } 00:17:52.059 } 00:17:52.059 ]' 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.059 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.319 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:52.319 13:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.887 13:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.456 00:17:53.456 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.456 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.456 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.715 13:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.715 { 00:17:53.715 "cntlid": 43, 00:17:53.715 "qid": 0, 00:17:53.715 "state": "enabled", 00:17:53.715 "thread": "nvmf_tgt_poll_group_000", 00:17:53.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:53.715 "listen_address": { 00:17:53.715 "trtype": "TCP", 00:17:53.715 "adrfam": "IPv4", 00:17:53.715 "traddr": "10.0.0.2", 00:17:53.715 "trsvcid": "4420" 00:17:53.715 }, 00:17:53.715 "peer_address": { 00:17:53.715 "trtype": "TCP", 00:17:53.715 "adrfam": "IPv4", 00:17:53.715 "traddr": "10.0.0.1", 00:17:53.715 "trsvcid": "44216" 00:17:53.715 }, 00:17:53.715 "auth": { 00:17:53.715 "state": "completed", 00:17:53.715 "digest": "sha256", 00:17:53.715 "dhgroup": "ffdhe8192" 00:17:53.715 } 00:17:53.715 } 00:17:53.715 ]' 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.715 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.974 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.974 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.974 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.974 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:53.974 13:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:17:54.543 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.543 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.543 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.543 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.543 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.543 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.543 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.543 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.802 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.803 13:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.461 00:17:55.461 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.462 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.462 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.462 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.462 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.462 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.462 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.462 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.462 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.462 { 00:17:55.462 "cntlid": 45, 00:17:55.462 "qid": 0, 00:17:55.462 "state": "enabled", 00:17:55.462 "thread": "nvmf_tgt_poll_group_000", 00:17:55.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:55.462 
"listen_address": { 00:17:55.462 "trtype": "TCP", 00:17:55.462 "adrfam": "IPv4", 00:17:55.462 "traddr": "10.0.0.2", 00:17:55.462 "trsvcid": "4420" 00:17:55.462 }, 00:17:55.462 "peer_address": { 00:17:55.462 "trtype": "TCP", 00:17:55.462 "adrfam": "IPv4", 00:17:55.462 "traddr": "10.0.0.1", 00:17:55.462 "trsvcid": "50700" 00:17:55.462 }, 00:17:55.462 "auth": { 00:17:55.462 "state": "completed", 00:17:55.462 "digest": "sha256", 00:17:55.462 "dhgroup": "ffdhe8192" 00:17:55.462 } 00:17:55.462 } 00:17:55.462 ]' 00:17:55.462 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.743 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.743 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.743 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.743 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.743 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.743 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.743 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.743 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:55.743 13:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:17:56.360 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.360 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.360 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.360 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.360 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.360 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.360 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.360 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.620 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.189 00:17:57.189 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.189 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:57.189 13:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.449 { 00:17:57.449 "cntlid": 47, 00:17:57.449 "qid": 0, 00:17:57.449 "state": "enabled", 00:17:57.449 "thread": "nvmf_tgt_poll_group_000", 00:17:57.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:57.449 "listen_address": { 00:17:57.449 "trtype": "TCP", 00:17:57.449 "adrfam": "IPv4", 00:17:57.449 "traddr": "10.0.0.2", 00:17:57.449 "trsvcid": "4420" 00:17:57.449 }, 00:17:57.449 "peer_address": { 00:17:57.449 "trtype": "TCP", 00:17:57.449 "adrfam": "IPv4", 00:17:57.449 "traddr": "10.0.0.1", 00:17:57.449 "trsvcid": "50744" 00:17:57.449 }, 00:17:57.449 "auth": { 00:17:57.449 "state": "completed", 00:17:57.449 "digest": "sha256", 00:17:57.449 "dhgroup": "ffdhe8192" 00:17:57.449 } 00:17:57.449 } 00:17:57.449 ]' 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.449 13:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.449 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.709 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:57.709 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.278 13:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.537 
13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.537 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:58.538 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:58.538 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:58.797
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:58.797 {
00:17:58.797 "cntlid": 49,
00:17:58.797 "qid": 0,
00:17:58.797 "state": "enabled",
00:17:58.797 "thread": "nvmf_tgt_poll_group_000",
00:17:58.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:17:58.797 "listen_address": {
00:17:58.797 "trtype": "TCP",
00:17:58.797 "adrfam": "IPv4",
00:17:58.797 "traddr": "10.0.0.2",
00:17:58.797 "trsvcid": "4420"
00:17:58.797 },
00:17:58.797 "peer_address": {
00:17:58.797 "trtype": "TCP",
00:17:58.797 "adrfam": "IPv4",
00:17:58.797 "traddr": "10.0.0.1",
00:17:58.797 "trsvcid": "50770"
00:17:58.797 },
00:17:58.797 "auth": {
00:17:58.797 "state": "completed",
00:17:58.797 "digest": "sha384",
00:17:58.797 "dhgroup": "null"
00:17:58.797 }
00:17:58.797 }
00:17:58.797 ]'
00:17:58.797 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:59.056 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:59.056 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:59.056 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:59.056 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:59.056 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:59.056 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:59.056 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:59.315 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=:
00:17:59.316 13:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=:
00:17:59.884 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:59.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:59.884 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:59.884 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.884 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:59.884 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.884 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:59.884 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:59.884 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:00.143 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:18:00.143 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:00.144 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:00.144
00:18:00.404 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:00.404 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:00.404 13:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:00.404 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:00.404 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:00.404 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.404 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.404 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.404 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:00.404 {
00:18:00.404 "cntlid": 51,
00:18:00.404 "qid": 0,
00:18:00.404 "state": "enabled",
00:18:00.404 "thread": "nvmf_tgt_poll_group_000",
00:18:00.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:00.404 "listen_address": {
00:18:00.404 "trtype": "TCP",
00:18:00.404 "adrfam": "IPv4",
00:18:00.404 "traddr": "10.0.0.2",
00:18:00.404 "trsvcid": "4420"
00:18:00.404 },
00:18:00.404 "peer_address": {
00:18:00.404 "trtype": "TCP",
00:18:00.404 "adrfam": "IPv4",
00:18:00.404 "traddr": "10.0.0.1",
00:18:00.404 "trsvcid": "50794"
00:18:00.404 },
00:18:00.404 "auth": {
00:18:00.404 "state": "completed",
00:18:00.404 "digest": "sha384",
00:18:00.404 "dhgroup": "null"
00:18:00.404 }
00:18:00.404 }
00:18:00.404 ]'
00:18:00.404 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:00.662 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:00.662 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:00.662 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:00.662 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:00.662 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:00.662 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:00.662 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:00.922 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==:
00:18:00.922 13:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==:
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:01.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:01.492 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:01.751
00:18:01.751 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:01.751 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:01.751 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:02.010 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:02.010 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:02.010 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.010 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:02.010 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.010 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:02.010 {
00:18:02.010 "cntlid": 53,
00:18:02.010 "qid": 0,
00:18:02.010 "state": "enabled",
00:18:02.010 "thread": "nvmf_tgt_poll_group_000",
00:18:02.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:02.010 "listen_address": {
00:18:02.010 "trtype": "TCP",
00:18:02.010 "adrfam": "IPv4",
00:18:02.010 "traddr": "10.0.0.2",
00:18:02.010 "trsvcid": "4420"
00:18:02.010 },
00:18:02.010 "peer_address": {
00:18:02.010 "trtype": "TCP",
00:18:02.010 "adrfam": "IPv4",
00:18:02.010 "traddr": "10.0.0.1",
00:18:02.010 "trsvcid": "50822"
00:18:02.010 },
00:18:02.010 "auth": {
00:18:02.010 "state": "completed",
00:18:02.010 "digest": "sha384",
00:18:02.010 "dhgroup": "null"
00:18:02.010 }
00:18:02.010 }
00:18:02.010 ]'
00:18:02.010 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:02.010 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:02.010 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:02.269 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:02.269 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:02.269 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:02.269 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:02.269 13:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:02.269 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY:
00:18:02.269 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY:
00:18:02.837 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:02.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:02.837 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:02.837 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.837 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:02.837 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.837 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:02.837 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:02.837 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:03.096 13:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:03.356
00:18:03.356 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:03.356 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:03.356 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:03.616 {
00:18:03.616 "cntlid": 55,
00:18:03.616 "qid": 0,
00:18:03.616 "state": "enabled",
00:18:03.616 "thread": "nvmf_tgt_poll_group_000",
00:18:03.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:03.616 "listen_address": {
00:18:03.616 "trtype": "TCP",
00:18:03.616 "adrfam": "IPv4",
00:18:03.616 "traddr": "10.0.0.2",
00:18:03.616 "trsvcid": "4420"
00:18:03.616 },
00:18:03.616 "peer_address": {
00:18:03.616 "trtype": "TCP",
00:18:03.616 "adrfam": "IPv4",
00:18:03.616 "traddr": "10.0.0.1",
00:18:03.616 "trsvcid": "50862"
00:18:03.616 },
00:18:03.616 "auth": {
00:18:03.616 "state": "completed",
00:18:03.616 "digest": "sha384",
00:18:03.616 "dhgroup": "null"
00:18:03.616 }
00:18:03.616 }
00:18:03.616 ]'
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:03.616 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:03.874 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=:
00:18:03.874 13:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=:
00:18:04.443 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:04.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:04.444 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:04.444 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.444 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:04.444 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.444 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:04.444 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:04.444 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:04.444 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:04.732 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:04.991
00:18:04.991 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:04.991 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:04.991 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:05.250 {
00:18:05.250 "cntlid": 57,
00:18:05.250 "qid": 0,
00:18:05.250 "state": "enabled",
00:18:05.250 "thread": "nvmf_tgt_poll_group_000",
00:18:05.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:05.250 "listen_address": {
00:18:05.250 "trtype": "TCP",
00:18:05.250 "adrfam": "IPv4",
00:18:05.250 "traddr": "10.0.0.2",
00:18:05.250 "trsvcid": "4420"
00:18:05.250 },
00:18:05.250 "peer_address": {
00:18:05.250 "trtype": "TCP",
00:18:05.250 "adrfam": "IPv4",
00:18:05.250 "traddr": "10.0.0.1",
00:18:05.250 "trsvcid": "52826"
00:18:05.250 },
00:18:05.250 "auth": {
00:18:05.250 "state": "completed",
00:18:05.250 "digest": "sha384",
00:18:05.250 "dhgroup": "ffdhe2048"
00:18:05.250 }
00:18:05.250 }
00:18:05.250 ]'
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:05.250 13:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:05.250 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:05.250 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:05.250 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:05.509 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=:
00:18:05.509 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=:
00:18:06.076 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:06.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:06.076 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:06.076 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.076 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.076 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.076 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:06.076 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:06.076 13:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:18:06.335 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:18:06.335 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:06.335 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:06.335 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:18:06.335 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:06.336 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:06.336 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:06.336 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.336 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.336 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.336 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:06.336 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:06.336 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:06.595
00:18:06.595 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:06.595 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:06.595 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:06.854 {
00:18:06.854 "cntlid": 59,
00:18:06.854 "qid": 0,
00:18:06.854 "state": "enabled",
00:18:06.854 "thread": "nvmf_tgt_poll_group_000",
00:18:06.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:06.854 "listen_address": {
00:18:06.854 "trtype": "TCP",
00:18:06.854 "adrfam": "IPv4",
00:18:06.854 "traddr": "10.0.0.2",
00:18:06.854 "trsvcid": "4420"
00:18:06.854 },
00:18:06.854 "peer_address": {
00:18:06.854 "trtype": "TCP",
00:18:06.854 "adrfam": "IPv4",
00:18:06.854 "traddr": "10.0.0.1",
00:18:06.854 "trsvcid": "52850"
00:18:06.854 },
00:18:06.854 "auth": {
00:18:06.854 "state": "completed",
00:18:06.854 "digest": "sha384",
00:18:06.854 "dhgroup": "ffdhe2048"
00:18:06.854 }
00:18:06.854 }
00:18:06.854 ]'
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:06.854 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:07.113 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==:
00:18:07.114 13:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==:
00:18:07.682 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:07.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:07.682 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.682 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.682 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.682 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.682 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.682 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:07.682 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.942 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.202 00:18:08.202 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.202 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.202 13:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.461 13:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.461 { 00:18:08.461 "cntlid": 61, 00:18:08.461 "qid": 0, 00:18:08.461 "state": "enabled", 00:18:08.461 "thread": "nvmf_tgt_poll_group_000", 00:18:08.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:08.461 "listen_address": { 00:18:08.461 "trtype": "TCP", 00:18:08.461 "adrfam": "IPv4", 00:18:08.461 "traddr": "10.0.0.2", 00:18:08.461 "trsvcid": "4420" 00:18:08.461 }, 00:18:08.461 "peer_address": { 00:18:08.461 "trtype": "TCP", 00:18:08.461 "adrfam": "IPv4", 00:18:08.461 "traddr": "10.0.0.1", 00:18:08.461 "trsvcid": "52878" 00:18:08.461 }, 00:18:08.461 "auth": { 00:18:08.461 "state": "completed", 00:18:08.461 "digest": "sha384", 00:18:08.461 "dhgroup": "ffdhe2048" 00:18:08.461 } 00:18:08.461 } 00:18:08.461 ]' 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.461 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.720 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:08.720 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:09.289 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.289 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.289 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.289 13:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.289 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.289 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.289 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.289 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.548 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.808 00:18:09.808 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.808 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.808 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.067 { 00:18:10.067 "cntlid": 63, 00:18:10.067 "qid": 0, 00:18:10.067 "state": "enabled", 00:18:10.067 "thread": "nvmf_tgt_poll_group_000", 00:18:10.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:10.067 "listen_address": { 00:18:10.067 "trtype": "TCP", 00:18:10.067 "adrfam": 
"IPv4", 00:18:10.067 "traddr": "10.0.0.2", 00:18:10.067 "trsvcid": "4420" 00:18:10.067 }, 00:18:10.067 "peer_address": { 00:18:10.067 "trtype": "TCP", 00:18:10.067 "adrfam": "IPv4", 00:18:10.067 "traddr": "10.0.0.1", 00:18:10.067 "trsvcid": "52896" 00:18:10.067 }, 00:18:10.067 "auth": { 00:18:10.067 "state": "completed", 00:18:10.067 "digest": "sha384", 00:18:10.067 "dhgroup": "ffdhe2048" 00:18:10.067 } 00:18:10.067 } 00:18:10.067 ]' 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.067 13:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.326 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:10.326 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:10.893 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.893 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.893 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.893 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.893 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.893 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.893 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.893 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.893 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.153 
13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.153 13:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.412 00:18:11.412 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.412 13:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.412 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.672 { 00:18:11.672 "cntlid": 65, 00:18:11.672 "qid": 0, 00:18:11.672 "state": "enabled", 00:18:11.672 "thread": "nvmf_tgt_poll_group_000", 00:18:11.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:11.672 "listen_address": { 00:18:11.672 "trtype": "TCP", 00:18:11.672 "adrfam": "IPv4", 00:18:11.672 "traddr": "10.0.0.2", 00:18:11.672 "trsvcid": "4420" 00:18:11.672 }, 00:18:11.672 "peer_address": { 00:18:11.672 "trtype": "TCP", 00:18:11.672 "adrfam": "IPv4", 00:18:11.672 "traddr": "10.0.0.1", 00:18:11.672 "trsvcid": "52942" 00:18:11.672 }, 00:18:11.672 "auth": { 00:18:11.672 "state": "completed", 00:18:11.672 "digest": "sha384", 00:18:11.672 "dhgroup": "ffdhe3072" 00:18:11.672 } 00:18:11.672 } 00:18:11.672 ]' 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.672 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.931 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:11.931 13:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:12.499 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.499 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.499 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.499 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.499 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.499 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.499 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.499 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.759 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.017 00:18:13.017 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.018 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.018 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.275 13:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.275 { 00:18:13.275 "cntlid": 67, 00:18:13.275 "qid": 0, 00:18:13.275 "state": "enabled", 00:18:13.275 "thread": "nvmf_tgt_poll_group_000", 00:18:13.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:13.275 "listen_address": { 00:18:13.275 "trtype": "TCP", 00:18:13.275 "adrfam": "IPv4", 00:18:13.275 "traddr": "10.0.0.2", 00:18:13.275 "trsvcid": "4420" 00:18:13.275 }, 00:18:13.275 "peer_address": { 00:18:13.275 "trtype": "TCP", 00:18:13.275 "adrfam": "IPv4", 00:18:13.275 "traddr": "10.0.0.1", 00:18:13.275 "trsvcid": "52980" 00:18:13.275 }, 00:18:13.275 "auth": { 00:18:13.275 "state": "completed", 00:18:13.275 "digest": "sha384", 00:18:13.275 "dhgroup": "ffdhe3072" 00:18:13.275 } 00:18:13.275 } 00:18:13.275 ]' 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.275 13:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.275 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.275 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.275 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.532 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:13.533 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:14.098 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.098 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.098 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.098 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.098 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.098 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.098 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.098 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.355 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.356 13:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.613 00:18:14.613 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.613 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.613 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.871 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.871 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.871 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.871 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.871 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.871 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.871 { 00:18:14.871 "cntlid": 69, 00:18:14.871 "qid": 0, 00:18:14.871 "state": "enabled", 00:18:14.871 "thread": "nvmf_tgt_poll_group_000", 00:18:14.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:14.871 
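The qpairs records above are verified by piping `rpc.py nvmf_subsystem_get_qpairs` output through `jq -r '.[0].auth.digest'` and friends, then comparing against the expected values. A minimal self-contained re-creation of those checks, using the JSON fields from this log and `sed` in place of `jq` (an assumption purely for portability of the sketch, not what the script does):

```shell
# Single-line copy of the auth-relevant fields from a qpairs record in this log.
qpairs='[{"cntlid": 67, "qid": 0, "state": "enabled", "auth": {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe3072"}}]'

# Extract a quoted string field. The greedy leading .* makes sed keep the LAST
# occurrence, so field state returns the auth.state ("completed"), matching
# what jq -r '.[0].auth.state' selects in the log.
field() { printf '%s' "$qpairs" | sed -n "s/.*\"$1\": *\"\([^\"]*\)\".*/\1/p"; }

# The same comparisons target/auth.sh@75-77 performs.
[ "$(field digest)" = sha384 ]     && echo "digest ok"
[ "$(field dhgroup)" = ffdhe3072 ] && echo "dhgroup ok"
[ "$(field state)" = completed ]   && echo "auth state ok"
```

The `[[ sha384 == \s\h\a\3\8\4 ]]` lines in the log are the xtrace rendering of these comparisons, with the right-hand side escaped by bash to suppress pattern matching.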
"listen_address": { 00:18:14.871 "trtype": "TCP", 00:18:14.871 "adrfam": "IPv4", 00:18:14.871 "traddr": "10.0.0.2", 00:18:14.871 "trsvcid": "4420" 00:18:14.871 }, 00:18:14.871 "peer_address": { 00:18:14.871 "trtype": "TCP", 00:18:14.871 "adrfam": "IPv4", 00:18:14.871 "traddr": "10.0.0.1", 00:18:14.871 "trsvcid": "41308" 00:18:14.871 }, 00:18:14.871 "auth": { 00:18:14.871 "state": "completed", 00:18:14.871 "digest": "sha384", 00:18:14.871 "dhgroup": "ffdhe3072" 00:18:14.871 } 00:18:14.871 } 00:18:14.871 ]' 00:18:14.871 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.871 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.872 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.872 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.872 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.872 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.872 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.872 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.130 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:15.130 13:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:15.697 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.697 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.697 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.697 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.697 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.697 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.697 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.697 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.955 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.213 00:18:16.213 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.213 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:16.213 13:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.213 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.213 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.213 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.213 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.213 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.213 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.213 { 00:18:16.213 "cntlid": 71, 00:18:16.213 "qid": 0, 00:18:16.213 "state": "enabled", 00:18:16.213 "thread": "nvmf_tgt_poll_group_000", 00:18:16.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:16.213 "listen_address": { 00:18:16.213 "trtype": "TCP", 00:18:16.213 "adrfam": "IPv4", 00:18:16.213 "traddr": "10.0.0.2", 00:18:16.213 "trsvcid": "4420" 00:18:16.213 }, 00:18:16.213 "peer_address": { 00:18:16.213 "trtype": "TCP", 00:18:16.213 "adrfam": "IPv4", 00:18:16.213 "traddr": "10.0.0.1", 00:18:16.213 "trsvcid": "41334" 00:18:16.213 }, 00:18:16.213 "auth": { 00:18:16.213 "state": "completed", 00:18:16.213 "digest": "sha384", 00:18:16.213 "dhgroup": "ffdhe3072" 00:18:16.213 } 00:18:16.213 } 00:18:16.213 ]' 00:18:16.213 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.471 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.471 13:02:16 
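The `--dhchap-secret` strings throughout this log follow the NVMe in-band authentication secret representation: `DHHC-1:<hh>:<base64>:`, where `<hh>` selects a hash transformation applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the key material followed by a 4-byte CRC-32. A quick check of one secret copied verbatim from the log (the decoding logic here is an illustration, not part of the test script):

```shell
# A DHHC-1:01 (SHA-256) secret taken from an nvme connect line in this log.
secret='DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT:'

payload=${secret#DHHC-1:*:}   # strip the "DHHC-1:<hh>:" prefix
payload=${payload%:}          # strip the trailing ':'

# 48 base64 chars with no padding decode to exactly 36 bytes:
# a 32-byte key plus the 4-byte CRC-32 trailer.
len=$(printf '%s' "$payload" | base64 -d | wc -c)
echo "decoded payload: $len bytes"
```

The `DHHC-1:02:` and `DHHC-1:03:` secrets elsewhere in the log carry correspondingly longer (48- and 64-byte) keys.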
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.471 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.471 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.471 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.471 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.471 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.730 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:16.730 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:17.299 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.299 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.299 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:17.299 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.299 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.299 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.299 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.299 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.299 13:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.299 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.558 00:18:17.817 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.817 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.817 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.817 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.817 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.817 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.817 13:02:17 
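The `for dhgroup in "${dhgroups[@]}"` / `for keyid in "${!keys[@]}"` markers (target/auth.sh@119-121) show the test sweeping every key through each DH group, reconfiguring the host side with `bdev_nvme_set_options` before each connect. A hedged sketch of that loop structure, with the dhgroup and key sets assumed from what is visible in this excerpt and `echo` standing in for the rpc.py call:

```shell
# Loop structure implied by the auth.sh@119/@120/@121 markers in the log.
# The exact arrays in the real script may differ; these are assumptions.
digest=sha384
dhgroups=(ffdhe3072 ffdhe4096)   # the two groups exercised in this excerpt
keys=(key0 key1 key2 key3)

for dhgroup in "${dhgroups[@]}"; do        # auth.sh@119
  for keyid in "${!keys[@]}"; do           # auth.sh@120
    # auth.sh@121: pin the host to one digest/dhgroup pair for this pass
    echo "rpc.py -s /var/tmp/host.sock bdev_nvme_set_options" \
         "--dhchap-digests $digest --dhchap-dhgroups $dhgroup  # key$keyid"
  done
done
```

Each inner-loop pass corresponds to one connect/verify/disconnect cycle in the log (add host with `--dhchap-key keyN`, attach, check qpairs, detach, `nvme connect`, `nvme disconnect`, remove host).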
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.817 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.817 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.817 { 00:18:17.817 "cntlid": 73, 00:18:17.817 "qid": 0, 00:18:17.817 "state": "enabled", 00:18:17.817 "thread": "nvmf_tgt_poll_group_000", 00:18:17.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:17.817 "listen_address": { 00:18:17.817 "trtype": "TCP", 00:18:17.817 "adrfam": "IPv4", 00:18:17.817 "traddr": "10.0.0.2", 00:18:17.817 "trsvcid": "4420" 00:18:17.817 }, 00:18:17.817 "peer_address": { 00:18:17.817 "trtype": "TCP", 00:18:17.817 "adrfam": "IPv4", 00:18:17.817 "traddr": "10.0.0.1", 00:18:17.817 "trsvcid": "41370" 00:18:17.817 }, 00:18:17.817 "auth": { 00:18:17.817 "state": "completed", 00:18:17.817 "digest": "sha384", 00:18:17.817 "dhgroup": "ffdhe4096" 00:18:17.817 } 00:18:17.817 } 00:18:17.817 ]' 00:18:17.817 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.077 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.077 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.077 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.077 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.077 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.077 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.077 13:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.077 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:18.077 13:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:18.646 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.905 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.164 00:18:19.423 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.423 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.423 13:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.423 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.423 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.423 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.423 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.423 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.423 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.423 { 00:18:19.423 "cntlid": 75, 00:18:19.423 "qid": 0, 00:18:19.423 "state": "enabled", 00:18:19.423 "thread": "nvmf_tgt_poll_group_000", 00:18:19.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:19.423 
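The `nvme connect` lines in this log pass the DHHC-1 secrets directly on the command line. A small sketch of how such an invocation can be assembled safely; building argv as a bash array avoids word-splitting surprises when secrets contain `+` or `/` characters. Values are copied from the log, and the sketch only prints the command rather than executing it:

```shell
# Parameters taken from an nvme connect invocation in this log.
traddr=10.0.0.2
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
hostid=80aaeb9f-0274-ea11-906e-0017a4403562
key='DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT:'

# Array-based argv: each element survives intact regardless of its contents.
cmd=(nvme connect -t tcp -a "$traddr" -n "$subnqn" -i 1
     -q "$hostnqn" --hostid "$hostid" -l 0 --dhchap-secret "$key")

printf '%s\n' "${cmd[*]}"   # print instead of running: "${cmd[@]}"
```

The matching `--dhchap-ctrl-secret` flag (bidirectional authentication) is appended the same way in the log's invocations.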
"listen_address": { 00:18:19.423 "trtype": "TCP", 00:18:19.423 "adrfam": "IPv4", 00:18:19.423 "traddr": "10.0.0.2", 00:18:19.423 "trsvcid": "4420" 00:18:19.423 }, 00:18:19.423 "peer_address": { 00:18:19.423 "trtype": "TCP", 00:18:19.423 "adrfam": "IPv4", 00:18:19.423 "traddr": "10.0.0.1", 00:18:19.423 "trsvcid": "41388" 00:18:19.423 }, 00:18:19.423 "auth": { 00:18:19.423 "state": "completed", 00:18:19.423 "digest": "sha384", 00:18:19.423 "dhgroup": "ffdhe4096" 00:18:19.423 } 00:18:19.423 } 00:18:19.423 ]' 00:18:19.423 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.423 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.423 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.683 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.683 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.683 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.683 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.683 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.683 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:19.683 13:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:20.251 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:20.510 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:20.769
00:18:20.769 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:20.769 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:21.028 {
00:18:21.028 "cntlid": 77,
00:18:21.028 "qid": 0,
00:18:21.028 "state": "enabled",
00:18:21.028 "thread": "nvmf_tgt_poll_group_000",
00:18:21.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:21.028 "listen_address": {
00:18:21.028 "trtype": "TCP",
00:18:21.028 "adrfam": "IPv4",
00:18:21.028 "traddr": "10.0.0.2",
00:18:21.028 "trsvcid": "4420"
00:18:21.028 },
00:18:21.028 "peer_address": {
00:18:21.028 "trtype": "TCP",
00:18:21.028 "adrfam": "IPv4",
00:18:21.028 "traddr": "10.0.0.1",
00:18:21.028 "trsvcid": "41412"
00:18:21.028 },
00:18:21.028 "auth": {
00:18:21.028 "state": "completed",
00:18:21.028 "digest": "sha384",
00:18:21.028 "dhgroup": "ffdhe4096"
00:18:21.028 }
00:18:21.028 }
00:18:21.028 ]'
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:21.028 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:21.287 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:21.287 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:21.287 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:21.287 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:21.287 13:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:21.546 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY:
00:18:21.546 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY:
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:22.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:22.116 13:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:22.375
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:22.635 {
00:18:22.635 "cntlid": 79,
00:18:22.635 "qid": 0,
00:18:22.635 "state": "enabled",
00:18:22.635 "thread": "nvmf_tgt_poll_group_000",
00:18:22.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:22.635 "listen_address": {
00:18:22.635 "trtype": "TCP",
00:18:22.635 "adrfam": "IPv4",
00:18:22.635 "traddr": "10.0.0.2",
00:18:22.635 "trsvcid": "4420"
00:18:22.635 },
00:18:22.635 "peer_address": {
00:18:22.635 "trtype": "TCP",
00:18:22.635 "adrfam": "IPv4",
00:18:22.635 "traddr": "10.0.0.1",
00:18:22.635 "trsvcid": "41458"
00:18:22.635 },
00:18:22.635 "auth": {
00:18:22.635 "state": "completed",
00:18:22.635 "digest": "sha384",
00:18:22.635 "dhgroup": "ffdhe4096"
00:18:22.635 }
00:18:22.635 }
00:18:22.635 ]'
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:22.635 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:22.894 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:22.894 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:22.894 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:22.894 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:22.894 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:23.153 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=:
00:18:23.153 13:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=:
00:18:23.517 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:23.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:23.517 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:23.517 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.517 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.517 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.517 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:23.517 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:23.517 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:23.517 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:23.873 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:24.132
00:18:24.132 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:24.132 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:24.132 13:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:24.392 {
00:18:24.392 "cntlid": 81,
00:18:24.392 "qid": 0,
00:18:24.392 "state": "enabled",
00:18:24.392 "thread": "nvmf_tgt_poll_group_000",
00:18:24.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:24.392 "listen_address": {
00:18:24.392 "trtype": "TCP",
00:18:24.392 "adrfam": "IPv4",
00:18:24.392 "traddr": "10.0.0.2",
00:18:24.392 "trsvcid": "4420"
00:18:24.392 },
00:18:24.392 "peer_address": {
00:18:24.392 "trtype": "TCP",
00:18:24.392 "adrfam": "IPv4",
00:18:24.392 "traddr": "10.0.0.1",
00:18:24.392 "trsvcid": "47732"
00:18:24.392 },
00:18:24.392 "auth": {
00:18:24.392 "state": "completed",
00:18:24.392 "digest": "sha384",
00:18:24.392 "dhgroup": "ffdhe6144"
00:18:24.392 }
00:18:24.392 }
00:18:24.392 ]'
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:24.392 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:24.651 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=:
00:18:24.651 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=:
00:18:25.220 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:25.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:25.220 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:25.220 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.220 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.220 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.220 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:25.220 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:25.220 13:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:25.490 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:25.750
00:18:25.750 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:25.750 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:25.750 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:26.009 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:26.009 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:26.009 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:26.009 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:26.009 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:26.009 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:26.009 {
00:18:26.009 "cntlid": 83,
00:18:26.009 "qid": 0,
00:18:26.009 "state": "enabled",
00:18:26.009 "thread": "nvmf_tgt_poll_group_000",
00:18:26.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:26.009 "listen_address": {
00:18:26.009 "trtype": "TCP",
00:18:26.009 "adrfam": "IPv4",
00:18:26.009 "traddr": "10.0.0.2",
00:18:26.009 "trsvcid": "4420"
00:18:26.009 },
00:18:26.009 "peer_address": {
00:18:26.009 "trtype": "TCP",
00:18:26.009 "adrfam": "IPv4",
00:18:26.009 "traddr": "10.0.0.1",
00:18:26.009 "trsvcid": "47752"
00:18:26.009 },
00:18:26.009 "auth": {
00:18:26.009 "state": "completed",
00:18:26.009 "digest": "sha384",
00:18:26.009 "dhgroup": "ffdhe6144"
00:18:26.009 }
00:18:26.009 }
00:18:26.009 ]'
00:18:26.009 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:26.009 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:26.009 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:26.268 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:26.268 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:26.268 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:26.268 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:26.268 13:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:26.527 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==:
00:18:26.527 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==:
00:18:27.096 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:27.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:27.096 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:27.096 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.097 13:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:27.665
00:18:27.665 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:27.665 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:27.665 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:27.665 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:27.665 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:27.665 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:27.665 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.665 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:27.665 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:27.665 {
00:18:27.665 "cntlid": 85,
00:18:27.665 "qid": 0,
00:18:27.665 "state": "enabled",
00:18:27.665 "thread": "nvmf_tgt_poll_group_000",
00:18:27.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:27.665 "listen_address": {
00:18:27.665 "trtype": "TCP",
00:18:27.665 "adrfam": "IPv4",
00:18:27.665 "traddr": "10.0.0.2",
00:18:27.665 "trsvcid": "4420"
00:18:27.665 },
00:18:27.665 "peer_address": {
00:18:27.665 "trtype": "TCP",
00:18:27.665 "adrfam": "IPv4",
00:18:27.665 "traddr": "10.0.0.1",
00:18:27.665 "trsvcid": "47778"
00:18:27.665 },
00:18:27.665 "auth": {
00:18:27.665 "state": "completed",
00:18:27.665 "digest": "sha384",
00:18:27.665 "dhgroup": "ffdhe6144"
00:18:27.665 }
00:18:27.665 }
00:18:27.665 ]'
00:18:27.924 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:27.924 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:27.924 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:27.924 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:27.924 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:27.924 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:27.924 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:28.183 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:28.183 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY:
00:18:28.183 13:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY:
00:18:28.751 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:28.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:28.751 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:28.751 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.751 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:28.751 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.751 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:28.751 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:28.751 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:29.010 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:29.011 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:29.269
00:18:29.269 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:29.269 13:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:29.529 {
00:18:29.529 "cntlid": 87,
00:18:29.529 "qid": 0,
00:18:29.529 "state": "enabled",
00:18:29.529 "thread": "nvmf_tgt_poll_group_000",
00:18:29.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:29.529 "listen_address": {
00:18:29.529 "trtype":
"TCP", 00:18:29.529 "adrfam": "IPv4", 00:18:29.529 "traddr": "10.0.0.2", 00:18:29.529 "trsvcid": "4420" 00:18:29.529 }, 00:18:29.529 "peer_address": { 00:18:29.529 "trtype": "TCP", 00:18:29.529 "adrfam": "IPv4", 00:18:29.529 "traddr": "10.0.0.1", 00:18:29.529 "trsvcid": "47798" 00:18:29.529 }, 00:18:29.529 "auth": { 00:18:29.529 "state": "completed", 00:18:29.529 "digest": "sha384", 00:18:29.529 "dhgroup": "ffdhe6144" 00:18:29.529 } 00:18:29.529 } 00:18:29.529 ]' 00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.529 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.788 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:29.788 13:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:30.357 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.357 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:30.357 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.357 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.357 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.357 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.357 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.357 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.357 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.616 13:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.616 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.184 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.184 { 00:18:31.184 "cntlid": 89, 00:18:31.184 "qid": 0, 00:18:31.184 "state": "enabled", 00:18:31.184 "thread": "nvmf_tgt_poll_group_000", 00:18:31.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:31.184 "listen_address": { 00:18:31.184 "trtype": "TCP", 00:18:31.184 "adrfam": "IPv4", 00:18:31.184 "traddr": "10.0.0.2", 00:18:31.184 "trsvcid": "4420" 00:18:31.184 }, 00:18:31.184 "peer_address": { 00:18:31.184 "trtype": "TCP", 00:18:31.184 "adrfam": "IPv4", 00:18:31.184 "traddr": "10.0.0.1", 00:18:31.184 "trsvcid": "47828" 00:18:31.184 }, 00:18:31.184 "auth": { 00:18:31.184 "state": "completed", 00:18:31.184 "digest": "sha384", 00:18:31.184 "dhgroup": "ffdhe8192" 00:18:31.184 } 00:18:31.184 } 00:18:31.184 ]' 00:18:31.184 13:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.184 13:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.443 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.443 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.443 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.443 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.443 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.443 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.702 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:31.702 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:32.269 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
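Each iteration of the `connect_authenticate` loop traced above drives the host-side SPDK app (listening on `/var/tmp/host.sock`) through the same three RPCs. A minimal dry-run sketch of that sequence, reusing the `rpc.py` path, NQNs, and addresses shown in this log — the `rpc` wrapper here only echoes the composed command line instead of contacting a live socket:

```shell
#!/bin/sh
# Dry-run sketch of one connect_authenticate iteration (sha384/ffdhe8192, key0).
# Paths, NQNs, and addresses are copied from this log; `rpc` echoes the command
# line rather than executing it against a live /var/tmp/host.sock.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc() { echo "$RPC_PY -s /var/tmp/host.sock $*"; }
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Restrict the host to a single digest/dhgroup combination.
set_opts=$(rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192)
# 2. Attach a controller, authenticating with key0 (ckey0 enables bidirectional auth).
attach=$(rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0)
# 3. Tear the controller down again before the next digest/dhgroup pair.
detach=$(rpc bdev_nvme_detach_controller nvme0)
printf '%s\n%s\n%s\n' "$set_opts" "$attach" "$detach"
```

On the target side, the matching `nvmf_subsystem_add_host`/`nvmf_subsystem_remove_host` calls (visible in the log as `rpc_cmd`) register and deregister the host NQN with the same key pair before and after each cycle.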
00:18:32.269 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.269 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.269 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.269 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.269 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.269 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:32.269 13:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:32.269 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:32.269 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.269 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:32.269 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:32.269 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:32.269 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.270 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.270 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.270 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.270 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.270 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.270 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.270 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.838 00:18:32.838 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.838 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.838 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.097 { 00:18:33.097 "cntlid": 91, 00:18:33.097 "qid": 0, 00:18:33.097 "state": "enabled", 00:18:33.097 "thread": "nvmf_tgt_poll_group_000", 00:18:33.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:33.097 "listen_address": { 00:18:33.097 "trtype": "TCP", 00:18:33.097 "adrfam": "IPv4", 00:18:33.097 "traddr": "10.0.0.2", 00:18:33.097 "trsvcid": "4420" 00:18:33.097 }, 00:18:33.097 "peer_address": { 00:18:33.097 "trtype": "TCP", 00:18:33.097 "adrfam": "IPv4", 00:18:33.097 "traddr": "10.0.0.1", 00:18:33.097 "trsvcid": "47860" 00:18:33.097 }, 00:18:33.097 "auth": { 00:18:33.097 "state": "completed", 00:18:33.097 "digest": "sha384", 00:18:33.097 "dhgroup": "ffdhe8192" 00:18:33.097 } 00:18:33.097 } 00:18:33.097 ]' 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.097 13:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.356 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:33.356 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:33.924 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.924 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:33.924 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.924 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.924 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.924 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
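The loop validates each attach by fetching the subsystem's queue pairs (`nvmf_subsystem_get_qpairs`) and checking the negotiated auth parameters with `jq -r '.[0].auth.digest'`, `.dhgroup`, and `.state`. The same three checks can be sketched without `jq`, using an abridged copy of the qpairs document from the key1 iteration above (cntlid 91); the JSON literal is trimmed to the fields the test actually inspects, and the `field` helper is illustrative, not part of the test suite:

```shell
#!/bin/sh
# Abridged qpairs JSON from this log's key1 iteration (cntlid 91), on one line
# so each field appears exactly once per input line for sed.
qpairs='[ { "cntlid": 91, "qid": 0, "state": "enabled", "auth": { "state": "completed", "digest": "sha384", "dhgroup": "ffdhe8192" } } ]'

# Extract "<name>": "<value>"; the greedy .* makes the pattern bind to the
# LAST occurrence, so field state yields auth.state ("completed"), not the
# qpair-level "enabled".
field() { printf '%s\n' "$qpairs" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"; }

digest=$(field digest)        # mirrors jq -r '.[0].auth.digest'
dhgroup=$(field dhgroup)      # mirrors jq -r '.[0].auth.dhgroup'
auth_state=$(field state)     # mirrors jq -r '.[0].auth.state'
printf '%s %s %s\n' "$digest" "$dhgroup" "$auth_state"
```

An `auth.state` of `completed` is what distinguishes a successfully negotiated DH-HMAC-CHAP session from a connection that merely established a TCP qpair.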
00:18:33.924 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:33.924 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.183 13:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.751 00:18:34.751 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.751 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.751 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.010 { 00:18:35.010 "cntlid": 93, 00:18:35.010 "qid": 0, 00:18:35.010 "state": "enabled", 00:18:35.010 "thread": "nvmf_tgt_poll_group_000", 00:18:35.010 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:35.010 "listen_address": { 00:18:35.010 "trtype": "TCP", 00:18:35.010 "adrfam": "IPv4", 00:18:35.010 "traddr": "10.0.0.2", 00:18:35.010 "trsvcid": "4420" 00:18:35.010 }, 00:18:35.010 "peer_address": { 00:18:35.010 "trtype": "TCP", 00:18:35.010 "adrfam": "IPv4", 00:18:35.010 "traddr": "10.0.0.1", 00:18:35.010 "trsvcid": "52504" 00:18:35.010 }, 00:18:35.010 "auth": { 00:18:35.010 "state": "completed", 00:18:35.010 "digest": "sha384", 00:18:35.010 "dhgroup": "ffdhe8192" 00:18:35.010 } 00:18:35.010 } 00:18:35.010 ]' 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.010 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.270 13:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:35.270 13:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:35.837 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.837 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.837 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.837 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.837 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.837 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.837 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.837 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.096 13:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.356 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.617 { 00:18:36.617 "cntlid": 95, 00:18:36.617 "qid": 0, 00:18:36.617 "state": "enabled", 00:18:36.617 "thread": "nvmf_tgt_poll_group_000", 00:18:36.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:36.617 "listen_address": { 00:18:36.617 "trtype": "TCP", 00:18:36.617 "adrfam": "IPv4", 00:18:36.617 "traddr": "10.0.0.2", 00:18:36.617 "trsvcid": "4420" 00:18:36.617 }, 00:18:36.617 "peer_address": { 00:18:36.617 "trtype": "TCP", 00:18:36.617 "adrfam": "IPv4", 00:18:36.617 "traddr": "10.0.0.1", 00:18:36.617 "trsvcid": "52512" 00:18:36.617 }, 00:18:36.617 "auth": { 00:18:36.617 "state": "completed", 00:18:36.617 "digest": "sha384", 00:18:36.617 "dhgroup": "ffdhe8192" 00:18:36.617 } 00:18:36.617 } 00:18:36.617 ]' 00:18:36.617 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.876 13:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.876 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.876 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.876 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.876 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.876 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.876 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.876 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:36.876 13:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:37.445 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.705 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.964 00:18:37.964 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.964 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.964 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.223 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.223 13:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.223 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.223 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.223 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.223 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.223 { 00:18:38.223 "cntlid": 97, 00:18:38.223 "qid": 0, 00:18:38.223 "state": "enabled", 00:18:38.223 "thread": "nvmf_tgt_poll_group_000", 00:18:38.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:38.223 "listen_address": { 00:18:38.223 "trtype": "TCP", 00:18:38.223 "adrfam": "IPv4", 00:18:38.223 "traddr": "10.0.0.2", 00:18:38.224 "trsvcid": "4420" 00:18:38.224 }, 00:18:38.224 "peer_address": { 00:18:38.224 "trtype": "TCP", 00:18:38.224 "adrfam": "IPv4", 00:18:38.224 "traddr": "10.0.0.1", 00:18:38.224 "trsvcid": "52538" 00:18:38.224 }, 00:18:38.224 "auth": { 00:18:38.224 "state": "completed", 00:18:38.224 "digest": "sha512", 00:18:38.224 "dhgroup": "null" 00:18:38.224 } 00:18:38.224 } 00:18:38.224 ]' 00:18:38.224 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.224 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.224 13:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.224 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:38.224 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.483 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.483 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.483 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.483 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:38.483 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:39.051 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.051 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.051 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.051 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.051 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.051 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.052 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.052 13:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.311 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.570 00:18:39.570 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.570 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.570 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.830 { 00:18:39.830 "cntlid": 99, 
00:18:39.830 "qid": 0, 00:18:39.830 "state": "enabled", 00:18:39.830 "thread": "nvmf_tgt_poll_group_000", 00:18:39.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:39.830 "listen_address": { 00:18:39.830 "trtype": "TCP", 00:18:39.830 "adrfam": "IPv4", 00:18:39.830 "traddr": "10.0.0.2", 00:18:39.830 "trsvcid": "4420" 00:18:39.830 }, 00:18:39.830 "peer_address": { 00:18:39.830 "trtype": "TCP", 00:18:39.830 "adrfam": "IPv4", 00:18:39.830 "traddr": "10.0.0.1", 00:18:39.830 "trsvcid": "52552" 00:18:39.830 }, 00:18:39.830 "auth": { 00:18:39.830 "state": "completed", 00:18:39.830 "digest": "sha512", 00:18:39.830 "dhgroup": "null" 00:18:39.830 } 00:18:39.830 } 00:18:39.830 ]' 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.830 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.089 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret 
DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:40.089 13:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:40.656 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.656 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:40.656 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.656 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.656 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.656 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.656 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.656 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.916 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.175 00:18:41.175 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.175 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.175 13:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.434 { 00:18:41.434 "cntlid": 101, 00:18:41.434 "qid": 0, 00:18:41.434 "state": "enabled", 00:18:41.434 "thread": "nvmf_tgt_poll_group_000", 00:18:41.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:41.434 "listen_address": { 00:18:41.434 "trtype": "TCP", 00:18:41.434 "adrfam": "IPv4", 00:18:41.434 "traddr": "10.0.0.2", 00:18:41.434 "trsvcid": "4420" 00:18:41.434 }, 00:18:41.434 "peer_address": { 00:18:41.434 "trtype": "TCP", 00:18:41.434 "adrfam": "IPv4", 00:18:41.434 "traddr": "10.0.0.1", 00:18:41.434 "trsvcid": "52590" 00:18:41.434 }, 00:18:41.434 "auth": { 00:18:41.434 "state": "completed", 00:18:41.434 "digest": "sha512", 00:18:41.434 "dhgroup": "null" 00:18:41.434 } 00:18:41.434 } 
00:18:41.434 ]' 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.434 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.693 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:41.693 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:42.262 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.262 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.262 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.262 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.262 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.262 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.262 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.262 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.262 13:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.522 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.781 00:18:42.781 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.781 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.781 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.781 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.781 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:42.781 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.781 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.040 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.040 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.040 { 00:18:43.040 "cntlid": 103, 00:18:43.040 "qid": 0, 00:18:43.040 "state": "enabled", 00:18:43.040 "thread": "nvmf_tgt_poll_group_000", 00:18:43.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:43.040 "listen_address": { 00:18:43.040 "trtype": "TCP", 00:18:43.040 "adrfam": "IPv4", 00:18:43.040 "traddr": "10.0.0.2", 00:18:43.040 "trsvcid": "4420" 00:18:43.040 }, 00:18:43.040 "peer_address": { 00:18:43.040 "trtype": "TCP", 00:18:43.040 "adrfam": "IPv4", 00:18:43.040 "traddr": "10.0.0.1", 00:18:43.040 "trsvcid": "52616" 00:18:43.040 }, 00:18:43.040 "auth": { 00:18:43.040 "state": "completed", 00:18:43.040 "digest": "sha512", 00:18:43.040 "dhgroup": "null" 00:18:43.040 } 00:18:43.040 } 00:18:43.040 ]' 00:18:43.040 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.040 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.040 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.040 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:43.040 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.040 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.040 13:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.040 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.301 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:43.301 13:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:43.869 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.869 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.869 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.869 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.869 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.869 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.869 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.869 13:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.869 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.128 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.388 00:18:44.388 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.388 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.388 13:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.388 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.388 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.388 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.388 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.388 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.388 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.388 { 00:18:44.388 "cntlid": 105, 00:18:44.388 "qid": 0, 00:18:44.388 "state": "enabled", 00:18:44.388 "thread": "nvmf_tgt_poll_group_000", 00:18:44.388 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:44.388 "listen_address": { 00:18:44.388 "trtype": "TCP", 00:18:44.388 "adrfam": "IPv4", 00:18:44.388 "traddr": "10.0.0.2", 00:18:44.388 "trsvcid": "4420" 00:18:44.388 }, 00:18:44.388 "peer_address": { 00:18:44.388 "trtype": "TCP", 00:18:44.388 "adrfam": "IPv4", 00:18:44.388 "traddr": "10.0.0.1", 00:18:44.388 "trsvcid": "48750" 00:18:44.388 }, 00:18:44.388 "auth": { 00:18:44.388 "state": "completed", 00:18:44.388 "digest": "sha512", 00:18:44.388 "dhgroup": "ffdhe2048" 00:18:44.388 } 00:18:44.388 } 00:18:44.388 ]' 00:18:44.388 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.647 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.647 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.647 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:44.647 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.647 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.647 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.647 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.906 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret 
DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:44.906 13:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.473 13:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.473 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.731 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.731 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.731 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.731 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.731 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.990 { 00:18:45.990 "cntlid": 107, 00:18:45.990 "qid": 0, 00:18:45.990 "state": "enabled", 00:18:45.990 "thread": "nvmf_tgt_poll_group_000", 00:18:45.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:45.990 "listen_address": { 00:18:45.990 "trtype": "TCP", 00:18:45.990 "adrfam": "IPv4", 00:18:45.990 "traddr": "10.0.0.2", 00:18:45.990 "trsvcid": "4420" 00:18:45.990 }, 00:18:45.990 "peer_address": { 00:18:45.990 "trtype": "TCP", 00:18:45.990 "adrfam": "IPv4", 00:18:45.990 "traddr": "10.0.0.1", 00:18:45.990 "trsvcid": "48786" 00:18:45.990 }, 00:18:45.990 "auth": { 00:18:45.990 "state": 
"completed", 00:18:45.990 "digest": "sha512", 00:18:45.990 "dhgroup": "ffdhe2048" 00:18:45.990 } 00:18:45.990 } 00:18:45.990 ]' 00:18:45.990 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.249 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.249 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.249 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:46.249 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.249 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.249 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.249 13:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.508 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:46.508 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:47.077 13:02:46 
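Each pass of the log above validates the negotiated session by pulling the qpair list for the subsystem and checking its `auth` block with `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). The same verification can be sketched in Python; the helper name `check_auth` is mine, not SPDK's, and the sample payload is trimmed from the `nvmf_subsystem_get_qpairs` output shown in the log:

```python
import json

def check_auth(qpairs_json: str, digest: str, dhgroup: str) -> bool:
    """Mirror the log's jq checks on the first qpair:
    auth.state must be 'completed' and digest/dhgroup must match."""
    auth = json.loads(qpairs_json)[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

# Trimmed sample of the nvmf_subsystem_get_qpairs output seen above.
sample = ('[{"cntlid": 107, "qid": 0, "state": "enabled", '
          '"auth": {"state": "completed", "digest": "sha512", '
          '"dhgroup": "ffdhe2048"}}]')
print(check_auth(sample, "sha512", "ffdhe2048"))  # True
```

This is only the check half; in the actual test the JSON comes from `rpc_cmd nvmf_subsystem_get_qpairs` against a live target.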
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.077 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.336 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.336 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.336 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.336 13:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.336 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.595 
13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.595 { 00:18:47.595 "cntlid": 109, 00:18:47.595 "qid": 0, 00:18:47.595 "state": "enabled", 00:18:47.595 "thread": "nvmf_tgt_poll_group_000", 00:18:47.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:47.595 "listen_address": { 00:18:47.595 "trtype": "TCP", 00:18:47.595 "adrfam": "IPv4", 00:18:47.595 "traddr": "10.0.0.2", 00:18:47.595 "trsvcid": "4420" 00:18:47.595 }, 00:18:47.595 "peer_address": { 00:18:47.595 "trtype": "TCP", 00:18:47.595 "adrfam": "IPv4", 00:18:47.595 "traddr": "10.0.0.1", 00:18:47.595 "trsvcid": "48818" 00:18:47.595 }, 00:18:47.595 "auth": { 00:18:47.595 "state": "completed", 00:18:47.595 "digest": "sha512", 00:18:47.595 "dhgroup": "ffdhe2048" 00:18:47.595 } 00:18:47.595 } 00:18:47.595 ]' 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.595 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.854 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.854 13:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.854 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.854 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.854 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.114 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:48.114 13:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.681 
13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.681 13:02:48 
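The expansion `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` visible in the trace explains why the key3 pass above calls `nvmf_subsystem_add_host` with `--dhchap-key key3` but no `--dhchap-ctrlr-key`: when the `ckeys` array has no non-empty entry for that index, the whole flag pair collapses to nothing. A Python sketch of the same argument building (the function name is illustrative, not an SPDK API):

```python
def dhchap_args(keyid: int, ckeys: list[str]) -> list[str]:
    """Build the --dhchap flags the way target/auth.sh does: the
    controller-key pair is emitted only if ckeys holds a non-empty
    entry for this keyid (bash's ${var:+...} expansion)."""
    args = ["--dhchap-key", f"key{keyid}"]
    if keyid < len(ckeys) and ckeys[keyid]:
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

ckeys = ["c0", "c1", "c2", ""]  # key3 has no controller key, as in the log
print(dhchap_args(0, ckeys))  # ['--dhchap-key', 'key0', '--dhchap-ctrlr-key', 'ckey0']
print(dhchap_args(3, ckeys))  # ['--dhchap-key', 'key3']
```

The same conditional shows up on the host side: the key3 `bdev_nvme_attach_controller` and `nvme connect` invocations above likewise carry only the one-way secret.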
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.681 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.939 00:18:48.939 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.939 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.939 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.198 { 00:18:49.198 "cntlid": 111, 
00:18:49.198 "qid": 0, 00:18:49.198 "state": "enabled", 00:18:49.198 "thread": "nvmf_tgt_poll_group_000", 00:18:49.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:49.198 "listen_address": { 00:18:49.198 "trtype": "TCP", 00:18:49.198 "adrfam": "IPv4", 00:18:49.198 "traddr": "10.0.0.2", 00:18:49.198 "trsvcid": "4420" 00:18:49.198 }, 00:18:49.198 "peer_address": { 00:18:49.198 "trtype": "TCP", 00:18:49.198 "adrfam": "IPv4", 00:18:49.198 "traddr": "10.0.0.1", 00:18:49.198 "trsvcid": "48854" 00:18:49.198 }, 00:18:49.198 "auth": { 00:18:49.198 "state": "completed", 00:18:49.198 "digest": "sha512", 00:18:49.198 "dhgroup": "ffdhe2048" 00:18:49.198 } 00:18:49.198 } 00:18:49.198 ]' 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.198 13:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.458 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.458 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.458 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.458 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:49.458 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:50.026 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.026 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:50.026 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.026 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.026 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.026 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.026 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.026 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:50.026 13:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:50.285 13:02:50 
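The `--dhchap-secret` strings passed around above (`DHHC-1:00:...`, `DHHC-1:03:...`) follow the NVMe DH-HMAC-CHAP key format: a version prefix, a two-digit HMAC identifier, and a base64 blob. As I understand the format (from the nvme-cli/kernel key handling), the base64 payload is the raw key followed by its CRC-32 in little-endian; the sketch below builds and parses such a string and should be read as an illustration of the layout, not a substitute for `nvme gen-dhchap-key`:

```python
import base64
import binascii
import struct

def make_dhchap_secret(key: bytes, hmac_id: int = 0) -> str:
    """Assemble a DHHC-1 secret: base64(key || CRC-32(key) as LE u32),
    prefixed with the HMAC identifier (00 = untransformed key).
    Layout per my reading of the nvme-cli key format -- a sketch."""
    blob = key + struct.pack("<I", binascii.crc32(key))
    return "DHHC-1:%02x:%s:" % (hmac_id, base64.b64encode(blob).decode())

def parse_dhchap_secret(secret: str) -> bytes:
    """Split a DHHC-1 string back into the raw key, verifying the CRC."""
    _, _hmac_id, b64, _ = secret.split(":")
    blob = base64.b64decode(b64)
    key, (crc,) = blob[:-4], struct.unpack("<I", blob[-4:])
    assert binascii.crc32(key) == crc, "CRC mismatch"
    return key

key = bytes(range(32))                     # a 32-byte example key
secret = make_dhchap_secret(key)
assert parse_dhchap_secret(secret) == key  # round-trips
```

The paired `--dhchap-ctrl-secret` values in the log are the same format; they carry the controller-side key used for bidirectional authentication.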
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.285 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.544 00:18:50.544 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.544 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.544 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.802 { 00:18:50.802 "cntlid": 113, 00:18:50.802 "qid": 0, 00:18:50.802 "state": "enabled", 00:18:50.802 "thread": "nvmf_tgt_poll_group_000", 00:18:50.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:50.802 "listen_address": { 00:18:50.802 "trtype": "TCP", 00:18:50.802 "adrfam": "IPv4", 00:18:50.802 "traddr": "10.0.0.2", 00:18:50.802 "trsvcid": "4420" 00:18:50.802 }, 00:18:50.802 "peer_address": { 00:18:50.802 "trtype": "TCP", 00:18:50.802 "adrfam": "IPv4", 00:18:50.802 "traddr": "10.0.0.1", 00:18:50.802 "trsvcid": "48868" 00:18:50.802 }, 00:18:50.802 "auth": { 00:18:50.802 "state": 
"completed", 00:18:50.802 "digest": "sha512", 00:18:50.802 "dhgroup": "ffdhe3072" 00:18:50.802 } 00:18:50.802 } 00:18:50.802 ]' 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:50.802 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.061 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.061 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.061 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.061 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:51.061 13:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret 
DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:51.628 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.628 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:51.628 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.628 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.628 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.628 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.628 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:51.628 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.887 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.191 00:18:52.191 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.192 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.192 13:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.450 { 00:18:52.450 "cntlid": 115, 00:18:52.450 "qid": 0, 00:18:52.450 "state": "enabled", 00:18:52.450 "thread": "nvmf_tgt_poll_group_000", 00:18:52.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:52.450 "listen_address": { 00:18:52.450 "trtype": "TCP", 00:18:52.450 "adrfam": "IPv4", 00:18:52.450 "traddr": "10.0.0.2", 00:18:52.450 "trsvcid": "4420" 00:18:52.450 }, 00:18:52.450 "peer_address": { 00:18:52.450 "trtype": "TCP", 00:18:52.450 "adrfam": "IPv4", 00:18:52.450 "traddr": "10.0.0.1", 00:18:52.450 "trsvcid": "48902" 00:18:52.450 }, 00:18:52.450 "auth": { 00:18:52.450 "state": "completed", 00:18:52.450 "digest": "sha512", 00:18:52.450 "dhgroup": "ffdhe3072" 00:18:52.450 } 00:18:52.450 } 00:18:52.450 ]' 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.450 13:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.450 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.708 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:52.708 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:53.274 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.274 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.274 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:53.274 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.274 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.274 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.274 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.274 13:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.532 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.790 00:18:53.790 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.790 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.790 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.049 13:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.049 { 00:18:54.049 "cntlid": 117, 00:18:54.049 "qid": 0, 00:18:54.049 "state": "enabled", 00:18:54.049 "thread": "nvmf_tgt_poll_group_000", 00:18:54.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:54.049 "listen_address": { 00:18:54.049 "trtype": "TCP", 00:18:54.049 "adrfam": "IPv4", 00:18:54.049 "traddr": "10.0.0.2", 00:18:54.049 "trsvcid": "4420" 00:18:54.049 }, 00:18:54.049 "peer_address": { 00:18:54.049 "trtype": "TCP", 00:18:54.049 "adrfam": "IPv4", 00:18:54.049 "traddr": "10.0.0.1", 00:18:54.049 "trsvcid": "48938" 00:18:54.049 }, 00:18:54.049 "auth": { 00:18:54.049 "state": "completed", 00:18:54.049 "digest": "sha512", 00:18:54.049 "dhgroup": "ffdhe3072" 00:18:54.049 } 00:18:54.049 } 00:18:54.049 ]' 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.049 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.309 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:54.309 13:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:18:54.877 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.877 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:54.877 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.877 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.877 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.877 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.877 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:54.877 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.136 13:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.396 00:18:55.396 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.396 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.396 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.655 { 00:18:55.655 "cntlid": 119, 00:18:55.655 "qid": 0, 00:18:55.655 "state": "enabled", 00:18:55.655 "thread": "nvmf_tgt_poll_group_000", 00:18:55.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:55.655 "listen_address": { 00:18:55.655 "trtype": "TCP", 00:18:55.655 "adrfam": "IPv4", 00:18:55.655 "traddr": "10.0.0.2", 00:18:55.655 "trsvcid": "4420" 00:18:55.655 }, 00:18:55.655 "peer_address": { 00:18:55.655 "trtype": "TCP", 00:18:55.655 "adrfam": "IPv4", 00:18:55.655 "traddr": "10.0.0.1", 
00:18:55.655 "trsvcid": "55640" 00:18:55.655 }, 00:18:55.655 "auth": { 00:18:55.655 "state": "completed", 00:18:55.655 "digest": "sha512", 00:18:55.655 "dhgroup": "ffdhe3072" 00:18:55.655 } 00:18:55.655 } 00:18:55.655 ]' 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.655 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.913 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:55.913 13:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:18:56.480 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.480 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.480 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.480 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.480 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.480 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.480 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.480 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.480 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:56.739 13:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.739 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.999 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.999 { 00:18:56.999 "cntlid": 121, 00:18:56.999 "qid": 0, 00:18:56.999 "state": "enabled", 00:18:56.999 "thread": "nvmf_tgt_poll_group_000", 00:18:56.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:56.999 "listen_address": { 00:18:56.999 "trtype": "TCP", 00:18:56.999 "adrfam": "IPv4", 00:18:56.999 "traddr": "10.0.0.2", 00:18:56.999 "trsvcid": "4420" 00:18:56.999 }, 00:18:56.999 "peer_address": { 00:18:56.999 "trtype": "TCP", 00:18:56.999 "adrfam": "IPv4", 00:18:56.999 "traddr": "10.0.0.1", 00:18:56.999 "trsvcid": "55664" 00:18:56.999 }, 00:18:56.999 "auth": { 00:18:56.999 "state": "completed", 00:18:56.999 "digest": "sha512", 00:18:56.999 "dhgroup": "ffdhe4096" 00:18:56.999 } 00:18:56.999 } 00:18:56.999 ]' 00:18:56.999 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.258 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.258 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.258 13:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:57.258 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.258 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.258 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.258 13:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.518 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:57.518 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.086 13:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.086 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.086 13:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.346 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.346 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.346 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.346 13:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.606 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.606 { 00:18:58.606 "cntlid": 123, 00:18:58.606 "qid": 0, 00:18:58.606 "state": "enabled", 00:18:58.606 "thread": "nvmf_tgt_poll_group_000", 00:18:58.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:58.606 "listen_address": { 00:18:58.606 "trtype": "TCP", 00:18:58.606 "adrfam": "IPv4", 00:18:58.606 "traddr": "10.0.0.2", 00:18:58.606 "trsvcid": "4420" 00:18:58.606 }, 00:18:58.606 "peer_address": { 00:18:58.606 "trtype": "TCP", 00:18:58.606 "adrfam": "IPv4", 00:18:58.606 "traddr": "10.0.0.1", 00:18:58.606 "trsvcid": "55702" 00:18:58.606 }, 00:18:58.606 "auth": { 00:18:58.606 "state": "completed", 00:18:58.606 "digest": "sha512", 00:18:58.606 "dhgroup": "ffdhe4096" 00:18:58.606 } 00:18:58.606 } 00:18:58.606 ]' 00:18:58.606 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.865 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.865 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.865 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:58.865 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.865 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.865 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.865 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.124 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:59.124 13:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:18:59.692 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.692 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.692 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.692 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.692 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.692 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.692 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.692 13:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.951 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.211 00:19:00.211 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.211 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.211 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.211 13:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.211 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.211 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.211 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.211 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.211 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.211 { 00:19:00.211 "cntlid": 125, 00:19:00.211 "qid": 0, 00:19:00.211 "state": "enabled", 00:19:00.211 "thread": "nvmf_tgt_poll_group_000", 00:19:00.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:00.211 "listen_address": { 00:19:00.211 "trtype": "TCP", 00:19:00.211 "adrfam": "IPv4", 00:19:00.211 "traddr": "10.0.0.2", 00:19:00.211 
"trsvcid": "4420" 00:19:00.211 }, 00:19:00.211 "peer_address": { 00:19:00.211 "trtype": "TCP", 00:19:00.211 "adrfam": "IPv4", 00:19:00.211 "traddr": "10.0.0.1", 00:19:00.211 "trsvcid": "55736" 00:19:00.211 }, 00:19:00.211 "auth": { 00:19:00.211 "state": "completed", 00:19:00.211 "digest": "sha512", 00:19:00.211 "dhgroup": "ffdhe4096" 00:19:00.211 } 00:19:00.211 } 00:19:00.211 ]' 00:19:00.211 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.470 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.470 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.470 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.470 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.470 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.470 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.470 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.728 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:19:00.728 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:19:01.296 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.296 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:01.296 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.296 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.296 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.296 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.296 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.296 13:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.296 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.297 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:01.297 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.297 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.556 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.815 { 00:19:01.815 "cntlid": 127, 00:19:01.815 "qid": 0, 00:19:01.815 "state": "enabled", 00:19:01.815 "thread": "nvmf_tgt_poll_group_000", 00:19:01.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:01.815 "listen_address": { 00:19:01.815 "trtype": "TCP", 00:19:01.815 "adrfam": "IPv4", 00:19:01.815 "traddr": "10.0.0.2", 00:19:01.815 "trsvcid": "4420" 00:19:01.815 }, 00:19:01.815 "peer_address": { 00:19:01.815 "trtype": "TCP", 00:19:01.815 "adrfam": "IPv4", 00:19:01.815 "traddr": "10.0.0.1", 00:19:01.815 "trsvcid": "55768" 00:19:01.815 }, 00:19:01.815 "auth": { 00:19:01.815 "state": "completed", 00:19:01.815 "digest": "sha512", 00:19:01.815 "dhgroup": "ffdhe4096" 00:19:01.815 } 00:19:01.815 } 00:19:01.815 ]' 00:19:01.815 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.074 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.074 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.074 
13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.074 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.074 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.074 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.074 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.333 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:02.333 13:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.930 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.931 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:02.931 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.931 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.931 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.931 13:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.498 00:19:03.498 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.498 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.498 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.498 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.498 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.498 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.498 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.498 13:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.757 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.757 { 00:19:03.757 "cntlid": 129, 00:19:03.757 "qid": 0, 00:19:03.757 "state": "enabled", 00:19:03.757 "thread": "nvmf_tgt_poll_group_000", 00:19:03.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:03.757 "listen_address": { 00:19:03.757 "trtype": "TCP", 00:19:03.757 "adrfam": "IPv4", 00:19:03.757 "traddr": "10.0.0.2", 00:19:03.757 "trsvcid": "4420" 00:19:03.757 }, 00:19:03.757 "peer_address": { 00:19:03.757 "trtype": "TCP", 00:19:03.757 "adrfam": "IPv4", 00:19:03.757 "traddr": "10.0.0.1", 00:19:03.757 "trsvcid": "55788" 00:19:03.757 }, 00:19:03.757 "auth": { 00:19:03.757 "state": "completed", 00:19:03.757 "digest": "sha512", 00:19:03.757 "dhgroup": "ffdhe6144" 00:19:03.757 } 00:19:03.757 } 00:19:03.757 ]' 00:19:03.757 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.757 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.757 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.757 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:03.757 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.757 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.758 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.758 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.017 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:19:04.017 13:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:19:04.585 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.585 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:04.585 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.585 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.585 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.585 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.585 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:04.585 13:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.844 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.103 00:19:05.103 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.103 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.103 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.362 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.362 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.362 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.362 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.362 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.362 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.362 { 00:19:05.362 "cntlid": 131, 00:19:05.362 "qid": 0, 00:19:05.362 "state": "enabled", 00:19:05.362 "thread": "nvmf_tgt_poll_group_000", 00:19:05.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:05.362 "listen_address": { 00:19:05.362 "trtype": "TCP", 00:19:05.362 "adrfam": "IPv4", 00:19:05.362 "traddr": "10.0.0.2", 00:19:05.362 
"trsvcid": "4420" 00:19:05.362 }, 00:19:05.362 "peer_address": { 00:19:05.362 "trtype": "TCP", 00:19:05.362 "adrfam": "IPv4", 00:19:05.362 "traddr": "10.0.0.1", 00:19:05.362 "trsvcid": "44266" 00:19:05.362 }, 00:19:05.362 "auth": { 00:19:05.363 "state": "completed", 00:19:05.363 "digest": "sha512", 00:19:05.363 "dhgroup": "ffdhe6144" 00:19:05.363 } 00:19:05.363 } 00:19:05.363 ]' 00:19:05.363 13:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.363 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.363 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.363 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.363 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.363 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.363 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.363 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.621 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:19:05.621 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:19:06.188 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.188 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:06.188 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.188 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.188 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.188 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.188 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.188 13:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.448 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.707 00:19:06.707 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.707 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:06.707 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.966 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.966 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.966 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.966 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.966 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.966 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.966 { 00:19:06.966 "cntlid": 133, 00:19:06.966 "qid": 0, 00:19:06.966 "state": "enabled", 00:19:06.966 "thread": "nvmf_tgt_poll_group_000", 00:19:06.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:06.966 "listen_address": { 00:19:06.966 "trtype": "TCP", 00:19:06.966 "adrfam": "IPv4", 00:19:06.966 "traddr": "10.0.0.2", 00:19:06.966 "trsvcid": "4420" 00:19:06.966 }, 00:19:06.966 "peer_address": { 00:19:06.966 "trtype": "TCP", 00:19:06.966 "adrfam": "IPv4", 00:19:06.967 "traddr": "10.0.0.1", 00:19:06.967 "trsvcid": "44292" 00:19:06.967 }, 00:19:06.967 "auth": { 00:19:06.967 "state": "completed", 00:19:06.967 "digest": "sha512", 00:19:06.967 "dhgroup": "ffdhe6144" 00:19:06.967 } 00:19:06.967 } 00:19:06.967 ]' 00:19:06.967 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.967 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.967 13:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.967 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:06.967 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.967 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.967 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.967 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.226 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:19:07.226 13:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:19:07.795 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.795 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.795 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.795 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.795 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.795 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.795 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:07.795 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.055 13:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:08.314 00:19:08.573 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.573 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.573 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.573 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.574 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.574 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.574 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.574 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.574 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.574 { 00:19:08.574 "cntlid": 135, 00:19:08.574 "qid": 0, 00:19:08.574 "state": "enabled", 00:19:08.574 "thread": "nvmf_tgt_poll_group_000", 00:19:08.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:08.574 "listen_address": { 00:19:08.574 "trtype": "TCP", 00:19:08.574 "adrfam": "IPv4", 00:19:08.574 "traddr": "10.0.0.2", 00:19:08.574 "trsvcid": "4420" 00:19:08.574 }, 00:19:08.574 "peer_address": { 00:19:08.574 "trtype": "TCP", 00:19:08.574 "adrfam": "IPv4", 00:19:08.574 "traddr": "10.0.0.1", 00:19:08.574 "trsvcid": "44316" 00:19:08.574 }, 00:19:08.574 "auth": { 00:19:08.574 "state": "completed", 00:19:08.574 "digest": "sha512", 00:19:08.574 "dhgroup": "ffdhe6144" 00:19:08.574 } 00:19:08.574 } 00:19:08.574 ]' 00:19:08.574 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.833 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.833 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.833 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.833 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.833 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.833 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.833 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.092 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:09.092 13:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:09.661 13:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.661 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.921 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.921 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.921 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.921 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.180 00:19:10.180 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.180 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.180 13:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.438 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.438 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.439 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.439 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.439 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.439 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.439 { 00:19:10.439 "cntlid": 137, 00:19:10.439 "qid": 0, 00:19:10.439 "state": "enabled", 00:19:10.439 "thread": "nvmf_tgt_poll_group_000", 00:19:10.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:10.439 "listen_address": { 00:19:10.439 "trtype": "TCP", 00:19:10.439 "adrfam": "IPv4", 00:19:10.439 "traddr": "10.0.0.2", 00:19:10.439 
"trsvcid": "4420" 00:19:10.439 }, 00:19:10.439 "peer_address": { 00:19:10.439 "trtype": "TCP", 00:19:10.439 "adrfam": "IPv4", 00:19:10.439 "traddr": "10.0.0.1", 00:19:10.439 "trsvcid": "44342" 00:19:10.439 }, 00:19:10.439 "auth": { 00:19:10.439 "state": "completed", 00:19:10.439 "digest": "sha512", 00:19:10.439 "dhgroup": "ffdhe8192" 00:19:10.439 } 00:19:10.439 } 00:19:10.439 ]' 00:19:10.439 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.439 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.439 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.698 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.698 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.698 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.698 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.698 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.957 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:19:10.957 13:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:19:11.525 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.525 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:11.525 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.525 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.525 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.525 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.526 13:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.526 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.114 00:19:12.114 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.114 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.114 13:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.373 { 00:19:12.373 "cntlid": 139, 00:19:12.373 "qid": 0, 00:19:12.373 "state": "enabled", 00:19:12.373 "thread": "nvmf_tgt_poll_group_000", 00:19:12.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:12.373 "listen_address": { 00:19:12.373 "trtype": "TCP", 00:19:12.373 "adrfam": "IPv4", 00:19:12.373 "traddr": "10.0.0.2", 00:19:12.373 "trsvcid": "4420" 00:19:12.373 }, 00:19:12.373 "peer_address": { 00:19:12.373 "trtype": "TCP", 00:19:12.373 "adrfam": "IPv4", 00:19:12.373 "traddr": "10.0.0.1", 00:19:12.373 "trsvcid": "44358" 00:19:12.373 }, 00:19:12.373 "auth": { 00:19:12.373 "state": "completed", 00:19:12.373 "digest": "sha512", 00:19:12.373 "dhgroup": "ffdhe8192" 00:19:12.373 } 00:19:12.373 } 00:19:12.373 ]' 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.373 13:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.373 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.632 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:19:12.632 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: --dhchap-ctrl-secret DHHC-1:02:ZjNjNTlmZDIwMTk0ZmFjYjZkMTk5NjQxMmNmMDk0YzMzOTZiOTU0NzJiZmFlMDYwduvFsw==: 00:19:13.201 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.201 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:13.201 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.201 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.201 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.201 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.201 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.201 13:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.460 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.026 00:19:14.026 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.026 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.026 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.026 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.026 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.026 13:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.026 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.285 { 00:19:14.285 "cntlid": 141, 00:19:14.285 "qid": 0, 00:19:14.285 "state": "enabled", 00:19:14.285 "thread": "nvmf_tgt_poll_group_000", 00:19:14.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:14.285 "listen_address": { 00:19:14.285 "trtype": "TCP", 00:19:14.285 "adrfam": "IPv4", 00:19:14.285 "traddr": "10.0.0.2", 00:19:14.285 "trsvcid": "4420" 00:19:14.285 }, 00:19:14.285 "peer_address": { 00:19:14.285 "trtype": "TCP", 00:19:14.285 "adrfam": "IPv4", 00:19:14.285 "traddr": "10.0.0.1", 00:19:14.285 "trsvcid": "44384" 00:19:14.285 }, 00:19:14.285 "auth": { 00:19:14.285 "state": "completed", 00:19:14.285 "digest": "sha512", 00:19:14.285 "dhgroup": "ffdhe8192" 00:19:14.285 } 00:19:14.285 } 00:19:14.285 ]' 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.285 13:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.544 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:19:14.544 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:01:MGI2YjkxNDdlMDFiNWY3OGJjYzY5MzlhMjNjMDAyYTA2e3SY: 00:19:15.111 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.111 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:15.111 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.111 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.111 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.111 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.111 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.111 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.370 13:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.628 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.886 { 00:19:15.886 "cntlid": 143, 00:19:15.886 "qid": 0, 00:19:15.886 "state": "enabled", 00:19:15.886 "thread": "nvmf_tgt_poll_group_000", 00:19:15.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:15.886 "listen_address": { 00:19:15.886 "trtype": "TCP", 00:19:15.886 "adrfam": 
"IPv4", 00:19:15.886 "traddr": "10.0.0.2", 00:19:15.886 "trsvcid": "4420" 00:19:15.886 }, 00:19:15.886 "peer_address": { 00:19:15.886 "trtype": "TCP", 00:19:15.886 "adrfam": "IPv4", 00:19:15.886 "traddr": "10.0.0.1", 00:19:15.886 "trsvcid": "37792" 00:19:15.886 }, 00:19:15.886 "auth": { 00:19:15.886 "state": "completed", 00:19:15.886 "digest": "sha512", 00:19:15.886 "dhgroup": "ffdhe8192" 00:19:15.886 } 00:19:15.886 } 00:19:15.886 ]' 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.886 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.145 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.145 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.145 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.145 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.145 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.404 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:16.404 13:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:16.972 13:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.972 13:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.540 00:19:17.540 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.540 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.540 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.798 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.799 { 00:19:17.799 "cntlid": 145, 00:19:17.799 "qid": 0, 00:19:17.799 "state": "enabled", 00:19:17.799 "thread": "nvmf_tgt_poll_group_000", 00:19:17.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:17.799 "listen_address": { 00:19:17.799 "trtype": "TCP", 00:19:17.799 "adrfam": "IPv4", 00:19:17.799 "traddr": "10.0.0.2", 00:19:17.799 "trsvcid": "4420" 00:19:17.799 }, 00:19:17.799 "peer_address": { 00:19:17.799 "trtype": "TCP", 00:19:17.799 "adrfam": "IPv4", 00:19:17.799 "traddr": "10.0.0.1", 00:19:17.799 "trsvcid": "37826" 00:19:17.799 }, 00:19:17.799 "auth": { 00:19:17.799 "state": 
"completed", 00:19:17.799 "digest": "sha512", 00:19:17.799 "dhgroup": "ffdhe8192" 00:19:17.799 } 00:19:17.799 } 00:19:17.799 ]' 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.799 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.057 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:19:18.057 13:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjVjNWNmZTEzOGIzODg4YmVjMGJiNWMwN2U1M2YwOGJhYmI0YWZlNjU0NzFkNzk1O4K/0w==: --dhchap-ctrl-secret 
DHHC-1:03:NzI1ODkyOTlhZjBjZmEyMDUwNjM2YTUwNjIyZDVkNTQ2Y2I0ZjE5OWM0NmVlNzI3M2IzZDczZWQ1NTA5YzYyMfdGkUc=: 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:18.624 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:19.192 request: 00:19:19.192 { 00:19:19.192 "name": "nvme0", 00:19:19.192 "trtype": "tcp", 00:19:19.192 "traddr": "10.0.0.2", 00:19:19.192 "adrfam": "ipv4", 00:19:19.192 "trsvcid": "4420", 00:19:19.192 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:19.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:19.192 "prchk_reftag": false, 00:19:19.192 "prchk_guard": false, 00:19:19.192 "hdgst": false, 00:19:19.192 "ddgst": false, 00:19:19.192 "dhchap_key": "key2", 00:19:19.192 "allow_unrecognized_csi": false, 00:19:19.192 "method": "bdev_nvme_attach_controller", 00:19:19.192 "req_id": 1 00:19:19.192 } 00:19:19.192 Got JSON-RPC error response 00:19:19.192 response: 00:19:19.192 { 00:19:19.192 "code": -5, 00:19:19.192 "message": 
"Input/output error" 00:19:19.192 } 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:19.192 13:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.192 13:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:19.760 request: 00:19:19.760 { 00:19:19.760 "name": "nvme0", 00:19:19.760 "trtype": "tcp", 00:19:19.760 "traddr": "10.0.0.2", 00:19:19.760 "adrfam": "ipv4", 00:19:19.760 "trsvcid": "4420", 00:19:19.760 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:19.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:19.760 "prchk_reftag": false, 00:19:19.760 "prchk_guard": false, 00:19:19.760 "hdgst": 
false, 00:19:19.760 "ddgst": false, 00:19:19.760 "dhchap_key": "key1", 00:19:19.760 "dhchap_ctrlr_key": "ckey2", 00:19:19.760 "allow_unrecognized_csi": false, 00:19:19.760 "method": "bdev_nvme_attach_controller", 00:19:19.760 "req_id": 1 00:19:19.760 } 00:19:19.760 Got JSON-RPC error response 00:19:19.760 response: 00:19:19.760 { 00:19:19.760 "code": -5, 00:19:19.760 "message": "Input/output error" 00:19:19.760 } 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.760 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.019 request: 00:19:20.019 { 00:19:20.019 "name": "nvme0", 00:19:20.019 "trtype": 
"tcp", 00:19:20.019 "traddr": "10.0.0.2", 00:19:20.019 "adrfam": "ipv4", 00:19:20.019 "trsvcid": "4420", 00:19:20.019 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:20.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:20.019 "prchk_reftag": false, 00:19:20.019 "prchk_guard": false, 00:19:20.019 "hdgst": false, 00:19:20.019 "ddgst": false, 00:19:20.019 "dhchap_key": "key1", 00:19:20.019 "dhchap_ctrlr_key": "ckey1", 00:19:20.019 "allow_unrecognized_csi": false, 00:19:20.019 "method": "bdev_nvme_attach_controller", 00:19:20.019 "req_id": 1 00:19:20.019 } 00:19:20.019 Got JSON-RPC error response 00:19:20.019 response: 00:19:20.019 { 00:19:20.019 "code": -5, 00:19:20.019 "message": "Input/output error" 00:19:20.019 } 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1974686 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1974686 ']' 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1974686 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.019 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974686 00:19:20.278 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.278 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.278 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974686' 00:19:20.278 killing process with pid 1974686 00:19:20.278 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1974686 00:19:20.278 13:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1974686 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1996819 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1996819 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1996819 ']' 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.278 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1996819 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1996819 ']' 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.537 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.797 null0 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UU4 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.797 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.F6m ]] 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.F6m 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.TLX 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.RhJ ]] 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RhJ 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0Qk 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.XEK ]] 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XEK 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.056 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zgI 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.057 13:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.624 nvme0n1 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.883 { 00:19:21.883 "cntlid": 1, 00:19:21.883 "qid": 0, 00:19:21.883 "state": "enabled", 00:19:21.883 "thread": "nvmf_tgt_poll_group_000", 00:19:21.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:21.883 "listen_address": { 00:19:21.883 "trtype": "TCP", 00:19:21.883 "adrfam": "IPv4", 00:19:21.883 "traddr": "10.0.0.2", 00:19:21.883 "trsvcid": "4420" 00:19:21.883 }, 00:19:21.883 "peer_address": { 00:19:21.883 "trtype": "TCP", 00:19:21.883 "adrfam": "IPv4", 00:19:21.883 "traddr": 
"10.0.0.1", 00:19:21.883 "trsvcid": "37892" 00:19:21.883 }, 00:19:21.883 "auth": { 00:19:21.883 "state": "completed", 00:19:21.883 "digest": "sha512", 00:19:21.883 "dhgroup": "ffdhe8192" 00:19:21.883 } 00:19:21.883 } 00:19:21.883 ]' 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.883 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.142 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.142 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.142 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.142 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.142 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.401 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:22.401 13:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:22.969 13:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:22.969 13:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:22.969 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:23.229 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.229 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:23.229 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.229 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:23.229 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.229 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.229 request: 00:19:23.229 { 00:19:23.229 "name": "nvme0", 00:19:23.229 "trtype": "tcp", 00:19:23.229 "traddr": "10.0.0.2", 00:19:23.229 "adrfam": "ipv4", 00:19:23.229 "trsvcid": "4420", 00:19:23.229 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:23.229 "prchk_reftag": false, 00:19:23.229 "prchk_guard": false, 00:19:23.229 "hdgst": false, 00:19:23.229 "ddgst": false, 00:19:23.229 "dhchap_key": "key3", 00:19:23.229 
"allow_unrecognized_csi": false, 00:19:23.229 "method": "bdev_nvme_attach_controller", 00:19:23.229 "req_id": 1 00:19:23.229 } 00:19:23.229 Got JSON-RPC error response 00:19:23.229 response: 00:19:23.229 { 00:19:23.230 "code": -5, 00:19:23.230 "message": "Input/output error" 00:19:23.230 } 00:19:23.230 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:23.230 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.230 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.230 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.230 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:23.230 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:23.230 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:23.230 13:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:23.489 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:23.489 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:23.489 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:23.489 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:23.489 13:03:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.489 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:23.489 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.489 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:23.489 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.489 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.748 request: 00:19:23.748 { 00:19:23.748 "name": "nvme0", 00:19:23.748 "trtype": "tcp", 00:19:23.748 "traddr": "10.0.0.2", 00:19:23.748 "adrfam": "ipv4", 00:19:23.748 "trsvcid": "4420", 00:19:23.748 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:23.748 "prchk_reftag": false, 00:19:23.748 "prchk_guard": false, 00:19:23.748 "hdgst": false, 00:19:23.748 "ddgst": false, 00:19:23.748 "dhchap_key": "key3", 00:19:23.748 "allow_unrecognized_csi": false, 00:19:23.748 "method": "bdev_nvme_attach_controller", 00:19:23.748 "req_id": 1 00:19:23.748 } 00:19:23.748 Got JSON-RPC error response 00:19:23.748 response: 00:19:23.748 { 00:19:23.748 "code": -5, 00:19:23.748 "message": "Input/output error" 00:19:23.748 } 00:19:23.748 
13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:23.748 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.748 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.748 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.748 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:23.748 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:23.748 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:23.748 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:23.748 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:23.748 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:24.007 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.008 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:24.267 request: 00:19:24.267 { 00:19:24.267 "name": "nvme0", 00:19:24.267 "trtype": "tcp", 00:19:24.267 "traddr": "10.0.0.2", 00:19:24.267 "adrfam": "ipv4", 00:19:24.267 "trsvcid": "4420", 00:19:24.267 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:24.267 "prchk_reftag": false, 00:19:24.267 "prchk_guard": false, 00:19:24.267 "hdgst": false, 00:19:24.267 "ddgst": false, 00:19:24.267 "dhchap_key": "key0", 00:19:24.267 "dhchap_ctrlr_key": "key1", 00:19:24.267 "allow_unrecognized_csi": false, 00:19:24.267 "method": "bdev_nvme_attach_controller", 00:19:24.267 "req_id": 1 00:19:24.267 } 00:19:24.267 Got JSON-RPC error response 00:19:24.267 response: 00:19:24.267 { 00:19:24.267 "code": -5, 00:19:24.267 "message": "Input/output error" 00:19:24.267 } 00:19:24.267 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:24.267 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.267 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.267 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.267 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:24.267 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:24.267 13:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:24.527 nvme0n1 00:19:24.527 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:24.527 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:24.527 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.786 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.786 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.786 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.045 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:19:25.045 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.045 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:25.045 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.045 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:25.046 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:25.046 13:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:25.613 nvme0n1 00:19:25.613 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:25.613 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:25.613 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.872 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.872 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:25.872 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.872 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.872 
13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.872 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:25.872 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:25.872 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.132 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.132 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:26.132 13:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: --dhchap-ctrl-secret DHHC-1:03:MzE3NjMxYzFmMGUxMmY2OGYxMmFjZGI5NGVkODc0ZDExYTk2NTQwYzlhNTc0MWVkYzdiYjc0NzRjY2JhMjA0Mi53My0=: 00:19:26.700 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:26.700 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:26.700 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:26.700 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:26.700 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:26.700 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:26.700 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:26.700 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.700 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:26.960 13:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:27.219 request: 00:19:27.219 { 00:19:27.219 "name": "nvme0", 00:19:27.219 "trtype": "tcp", 00:19:27.219 "traddr": "10.0.0.2", 00:19:27.219 "adrfam": "ipv4", 00:19:27.219 "trsvcid": "4420", 00:19:27.219 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:27.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:27.219 "prchk_reftag": false, 00:19:27.219 "prchk_guard": false, 00:19:27.219 "hdgst": false, 00:19:27.219 "ddgst": false, 00:19:27.219 "dhchap_key": "key1", 00:19:27.219 "allow_unrecognized_csi": false, 00:19:27.219 "method": "bdev_nvme_attach_controller", 00:19:27.219 "req_id": 1 00:19:27.220 } 00:19:27.220 Got JSON-RPC error response 00:19:27.220 response: 00:19:27.220 { 00:19:27.220 "code": -5, 00:19:27.220 "message": "Input/output error" 00:19:27.220 } 00:19:27.220 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:27.220 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.220 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.220 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.220 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:27.220 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:27.220 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:28.159 nvme0n1 00:19:28.159 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:28.159 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:28.159 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.159 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.159 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.159 13:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.419 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:28.419 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.419 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.419 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.419 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:28.419 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:28.419 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:28.677 nvme0n1 00:19:28.677 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:28.678 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.678 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:28.937 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.937 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.937 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: '' 2s 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: ]] 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2U4MDkxOGRkYjIwY2IwYzIxZDhmMzYwYTEzMDJkN2KVNhjT: 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:29.196 13:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:31.101 
13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: 2s 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:31.101 13:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: ]] 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MDljYzYyNDMyOTQwMjhmZjdmYWY0MTRkOWM2M2I3MmMxNjVlZGU5YmZmZWZmMjczB3+8pw==: 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:31.101 13:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:33.637 13:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:33.898 nvme0n1 00:19:33.898 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:19:33.898 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.898 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.898 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.898 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:33.898 13:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:34.465 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:34.465 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.465 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:34.723 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.723 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.723 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.723 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.723 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.723 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:34.723 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:34.981 13:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:35.546 request: 00:19:35.546 { 00:19:35.546 "name": "nvme0", 00:19:35.546 "dhchap_key": "key1", 00:19:35.546 "dhchap_ctrlr_key": "key3", 00:19:35.546 "method": "bdev_nvme_set_keys", 00:19:35.546 "req_id": 1 00:19:35.546 } 00:19:35.546 Got JSON-RPC error response 00:19:35.546 response: 00:19:35.546 { 00:19:35.546 "code": -13, 00:19:35.546 "message": "Permission denied" 00:19:35.546 } 00:19:35.546 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:35.546 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.546 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.546 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.546 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:35.546 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:35.546 13:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.804 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:19:35.804 13:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:36.741 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:36.741 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:36.741 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.001 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:37.001 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:37.001 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.001 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.001 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.001 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:37.001 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:37.001 13:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:37.570 nvme0n1 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.570 13:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:37.570 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:38.140 request: 00:19:38.140 { 00:19:38.140 "name": "nvme0", 00:19:38.140 "dhchap_key": "key2", 00:19:38.140 "dhchap_ctrlr_key": "key0", 00:19:38.140 "method": "bdev_nvme_set_keys", 00:19:38.140 "req_id": 1 00:19:38.140 } 00:19:38.140 Got JSON-RPC error response 00:19:38.140 response: 00:19:38.140 { 00:19:38.140 "code": -13, 00:19:38.140 "message": "Permission denied" 00:19:38.140 } 00:19:38.140 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:38.140 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.140 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.140 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.140 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:38.140 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:38.140 13:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.399 13:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:38.399 13:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:39.336 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:39.336 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:39.336 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1974755 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1974755 ']' 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1974755 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974755 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974755' 00:19:39.596 killing process with pid 1974755 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1974755 00:19:39.596 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1974755 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:39.856 rmmod nvme_tcp 00:19:39.856 rmmod nvme_fabrics 00:19:39.856 rmmod nvme_keyring 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1996819 ']' 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1996819 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1996819 ']' 00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1996819 
00:19:39.856 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1996819 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1996819' 00:19:40.115 killing process with pid 1996819 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1996819 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1996819 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:40.115 13:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.115 13:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.653 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:42.653 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UU4 /tmp/spdk.key-sha256.TLX /tmp/spdk.key-sha384.0Qk /tmp/spdk.key-sha512.zgI /tmp/spdk.key-sha512.F6m /tmp/spdk.key-sha384.RhJ /tmp/spdk.key-sha256.XEK '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:42.653 00:19:42.653 real 2m31.396s 00:19:42.653 user 5m49.804s 00:19:42.653 sys 0m23.606s 00:19:42.653 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.653 13:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.653 ************************************ 00:19:42.653 END TEST nvmf_auth_target 00:19:42.653 ************************************ 00:19:42.653 13:03:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:42.653 13:03:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:42.653 13:03:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:42.653 13:03:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:19:42.653 13:03:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:42.653 ************************************ 00:19:42.653 START TEST nvmf_bdevio_no_huge 00:19:42.653 ************************************ 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:42.653 * Looking for test storage... 00:19:42.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.653 --rc genhtml_branch_coverage=1 00:19:42.653 --rc genhtml_function_coverage=1 00:19:42.653 --rc genhtml_legend=1 00:19:42.653 --rc geninfo_all_blocks=1 00:19:42.653 --rc geninfo_unexecuted_blocks=1 00:19:42.653 00:19:42.653 ' 00:19:42.653 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.653 --rc genhtml_branch_coverage=1 00:19:42.653 --rc genhtml_function_coverage=1 00:19:42.653 --rc genhtml_legend=1 00:19:42.653 --rc geninfo_all_blocks=1 00:19:42.653 --rc geninfo_unexecuted_blocks=1 00:19:42.653 00:19:42.653 ' 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:42.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.654 --rc genhtml_branch_coverage=1 00:19:42.654 --rc genhtml_function_coverage=1 00:19:42.654 --rc genhtml_legend=1 00:19:42.654 --rc geninfo_all_blocks=1 00:19:42.654 --rc geninfo_unexecuted_blocks=1 00:19:42.654 00:19:42.654 ' 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:42.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.654 --rc genhtml_branch_coverage=1 
00:19:42.654 --rc genhtml_function_coverage=1 00:19:42.654 --rc genhtml_legend=1 00:19:42.654 --rc geninfo_all_blocks=1 00:19:42.654 --rc geninfo_unexecuted_blocks=1 00:19:42.654 00:19:42.654 ' 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.654 13:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:42.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:42.654 13:03:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:19:47.933 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:47.933 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:47.933 Found net devices under 0000:86:00.0: cvl_0_0 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.933 
13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:47.933 Found net devices under 0000:86:00.1: cvl_0_1 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.933 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:19:47.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:19:47.934 00:19:47.934 --- 10.0.0.2 ping statistics --- 00:19:47.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.934 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:19:47.934 00:19:47.934 --- 10.0.0.1 ping statistics --- 00:19:47.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.934 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2003599 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2003599 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2003599 ']' 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.934 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.194 [2024-11-29 13:03:47.774873] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:19:48.194 [2024-11-29 13:03:47.774920] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:48.194 [2024-11-29 13:03:47.848123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.194 [2024-11-29 13:03:47.895536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.194 [2024-11-29 13:03:47.895570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.194 [2024-11-29 13:03:47.895577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.194 [2024-11-29 13:03:47.895583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.194 [2024-11-29 13:03:47.895588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.194 [2024-11-29 13:03:47.896838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:48.194 [2024-11-29 13:03:47.896959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:48.194 [2024-11-29 13:03:47.897059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.194 [2024-11-29 13:03:47.897059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:48.194 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.194 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:48.194 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.194 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.194 13:03:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.453 [2024-11-29 13:03:48.045573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:48.453 13:03:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.453 Malloc0 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:48.453 [2024-11-29 13:03:48.081839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.453 13:03:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:48.453 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:48.454 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:48.454 { 00:19:48.454 "params": { 00:19:48.454 "name": "Nvme$subsystem", 00:19:48.454 "trtype": "$TEST_TRANSPORT", 00:19:48.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.454 "adrfam": "ipv4", 00:19:48.454 "trsvcid": "$NVMF_PORT", 00:19:48.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.454 "hdgst": ${hdgst:-false}, 00:19:48.454 "ddgst": ${ddgst:-false} 00:19:48.454 }, 00:19:48.454 "method": "bdev_nvme_attach_controller" 00:19:48.454 } 00:19:48.454 EOF 00:19:48.454 )") 00:19:48.454 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:48.454 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:19:48.454 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:48.454 13:03:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:48.454 "params": { 00:19:48.454 "name": "Nvme1", 00:19:48.454 "trtype": "tcp", 00:19:48.454 "traddr": "10.0.0.2", 00:19:48.454 "adrfam": "ipv4", 00:19:48.454 "trsvcid": "4420", 00:19:48.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.454 "hdgst": false, 00:19:48.454 "ddgst": false 00:19:48.454 }, 00:19:48.454 "method": "bdev_nvme_attach_controller" 00:19:48.454 }' 00:19:48.454 [2024-11-29 13:03:48.135272] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:19:48.454 [2024-11-29 13:03:48.135315] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2003622 ] 00:19:48.454 [2024-11-29 13:03:48.203177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:48.454 [2024-11-29 13:03:48.252618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.454 [2024-11-29 13:03:48.252715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.454 [2024-11-29 13:03:48.252717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.713 I/O targets: 00:19:48.713 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:48.713 00:19:48.713 00:19:48.713 CUnit - A unit testing framework for C - Version 2.1-3 00:19:48.713 http://cunit.sourceforge.net/ 00:19:48.713 00:19:48.713 00:19:48.713 Suite: bdevio tests on: Nvme1n1 00:19:48.971 Test: blockdev write read block ...passed 00:19:48.971 Test: blockdev write zeroes read block ...passed 00:19:48.971 Test: blockdev write zeroes read no split ...passed 00:19:48.971 Test: blockdev write zeroes 
read split ...passed 00:19:48.971 Test: blockdev write zeroes read split partial ...passed 00:19:48.971 Test: blockdev reset ...[2024-11-29 13:03:48.617423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:48.971 [2024-11-29 13:03:48.617491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191d8e0 (9): Bad file descriptor 00:19:48.971 [2024-11-29 13:03:48.647880] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:19:48.971 passed 00:19:48.971 Test: blockdev write read 8 blocks ...passed 00:19:48.971 Test: blockdev write read size > 128k ...passed 00:19:48.971 Test: blockdev write read invalid size ...passed 00:19:48.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:48.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:48.971 Test: blockdev write read max offset ...passed 00:19:49.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:49.231 Test: blockdev writev readv 8 blocks ...passed 00:19:49.231 Test: blockdev writev readv 30 x 1block ...passed 00:19:49.231 Test: blockdev writev readv block ...passed 00:19:49.231 Test: blockdev writev readv size > 128k ...passed 00:19:49.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:49.231 Test: blockdev comparev and writev ...[2024-11-29 13:03:48.859662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.231 [2024-11-29 13:03:48.859692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.859711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.231 [2024-11-29 
13:03:48.859722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.860001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.231 [2024-11-29 13:03:48.860017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.860034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.231 [2024-11-29 13:03:48.860046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.860308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.231 [2024-11-29 13:03:48.860320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.860337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.231 [2024-11-29 13:03:48.860348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.860600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.231 [2024-11-29 13:03:48.860612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.860627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.231 [2024-11-29 13:03:48.860639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:49.231 passed 00:19:49.231 Test: blockdev nvme passthru rw ...passed 00:19:49.231 Test: blockdev nvme passthru vendor specific ...[2024-11-29 13:03:48.942373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.231 [2024-11-29 13:03:48.942390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.942510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.231 [2024-11-29 13:03:48.942530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.942645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.231 [2024-11-29 13:03:48.942657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:49.231 [2024-11-29 13:03:48.942771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.231 [2024-11-29 13:03:48.942782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:49.231 passed 00:19:49.231 Test: blockdev nvme admin passthru ...passed 00:19:49.231 Test: blockdev copy ...passed 00:19:49.231 00:19:49.231 Run Summary: Type Total Ran Passed Failed Inactive 00:19:49.231 suites 1 1 n/a 0 0 00:19:49.231 tests 23 23 23 0 0 00:19:49.231 asserts 152 152 152 0 n/a 00:19:49.231 00:19:49.231 Elapsed time = 1.003 seconds 
00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.490 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.490 rmmod nvme_tcp 00:19:49.490 rmmod nvme_fabrics 00:19:49.758 rmmod nvme_keyring 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2003599 ']' 00:19:49.758 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2003599 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2003599 ']' 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2003599 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2003599 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2003599' 00:19:49.758 killing process with pid 2003599 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2003599 00:19:49.758 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2003599 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:50.073 13:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.073 13:03:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:52.198 00:19:52.198 real 0m9.742s 00:19:52.198 user 0m10.606s 00:19:52.198 sys 0m5.023s 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:52.198 ************************************ 00:19:52.198 END TEST nvmf_bdevio_no_huge 00:19:52.198 ************************************ 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:52.198 
************************************ 00:19:52.198 START TEST nvmf_tls 00:19:52.198 ************************************ 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:52.198 * Looking for test storage... 00:19:52.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:52.198 13:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.198 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:52.458 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:52.458 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.458 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:52.458 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.459 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.459 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.459 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:52.459 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.459 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:52.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.459 --rc genhtml_branch_coverage=1 00:19:52.459 --rc genhtml_function_coverage=1 00:19:52.459 --rc genhtml_legend=1 00:19:52.459 --rc geninfo_all_blocks=1 00:19:52.459 --rc geninfo_unexecuted_blocks=1 00:19:52.459 00:19:52.459 ' 00:19:52.459 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:52.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.459 --rc genhtml_branch_coverage=1 00:19:52.459 --rc genhtml_function_coverage=1 00:19:52.459 --rc genhtml_legend=1 00:19:52.459 --rc geninfo_all_blocks=1 00:19:52.459 --rc geninfo_unexecuted_blocks=1 00:19:52.459 00:19:52.459 ' 00:19:52.459 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:52.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.459 --rc genhtml_branch_coverage=1 00:19:52.459 --rc genhtml_function_coverage=1 00:19:52.459 --rc genhtml_legend=1 00:19:52.459 --rc geninfo_all_blocks=1 00:19:52.459 --rc geninfo_unexecuted_blocks=1 00:19:52.459 00:19:52.459 ' 00:19:52.459 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:52.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.460 --rc genhtml_branch_coverage=1 00:19:52.460 --rc genhtml_function_coverage=1 00:19:52.460 --rc genhtml_legend=1 00:19:52.460 --rc geninfo_all_blocks=1 00:19:52.460 --rc geninfo_unexecuted_blocks=1 00:19:52.460 00:19:52.460 ' 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.460 
13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.460 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.461 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.461 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.461 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.461 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.461 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:52.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:52.462 13:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.736 13:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:57.736 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:57.736 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:57.736 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.737 13:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:57.737 Found net devices under 0000:86:00.0: cvl_0_0 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:57.737 Found net devices under 0000:86:00.1: cvl_0_1 00:19:57.737 13:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:57.737 
13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.737 13:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:57.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:19:57.737 00:19:57.737 --- 10.0.0.2 ping statistics --- 00:19:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.737 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:57.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:19:57.737 00:19:57.737 --- 10.0.0.1 ping statistics --- 00:19:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.737 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2007389 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2007389 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2007389 ']' 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.737 [2024-11-29 13:03:57.304672] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:19:57.737 [2024-11-29 13:03:57.304715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.737 [2024-11-29 13:03:57.371089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.737 [2024-11-29 13:03:57.412722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.737 [2024-11-29 13:03:57.412757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:57.737 [2024-11-29 13:03:57.412765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.737 [2024-11-29 13:03:57.412775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.737 [2024-11-29 13:03:57.412780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.737 [2024-11-29 13:03:57.413353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:57.737 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:57.997 true 00:19:57.997 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:57.997 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:58.256 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:58.256 13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:58.256 
13:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:58.256 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:58.256 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.515 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:58.515 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:58.515 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:58.773 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:58.774 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:59.033 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:59.033 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:59.033 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.033 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:59.033 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:59.033 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:59.033 13:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
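The tls-version and kTLS toggles traced above are ordinary JSON-RPC calls against the running target's `ssl` sock implementation. A minimal sketch of that round-trip follows; the `rpc.py` path and option names are taken from this run, but the calls need a live `nvmf_tgt`, so they are only echoed here (set `DRY_RUN=0` against a real target):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the sock_impl option round-trip exercised in this log.
# RPC path is the one used by this run; requires a running nvmf_tgt to execute.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

run "$RPC" sock_impl_set_options -i ssl --tls-version 13   # request TLS 1.3
run "$RPC" sock_impl_get_options -i ssl                    # verify via jq -r .tls_version
run "$RPC" sock_impl_set_options -i ssl --enable-ktls      # toggle kernel TLS on
run "$RPC" sock_impl_set_options -i ssl --disable-ktls     # and back off
```

Each set is followed by a `sock_impl_get_options` read-back, which is exactly the `version=13` / `ktls=true` / `ktls=false` checks visible in the trace.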
00:19:59.292 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.292 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:59.551 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:59.551 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:59.551 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:59.810 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:00.069 13:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.1j8JZXr9VG 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.oXHSXcE3qO 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1j8JZXr9VG 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.oXHSXcE3qO 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:00.069 13:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:00.328 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.1j8JZXr9VG 00:20:00.328 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1j8JZXr9VG 00:20:00.328 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:00.587 [2024-11-29 13:04:00.318438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.587 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:00.846 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:01.105 [2024-11-29 13:04:00.687380] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.105 [2024-11-29 13:04:00.687590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.105 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:01.105 malloc0 00:20:01.105 13:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:01.363 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1j8JZXr9VG 00:20:01.623 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.882 13:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1j8JZXr9VG 00:20:11.864 Initializing NVMe Controllers 00:20:11.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:11.864 Initialization complete. Launching workers. 
00:20:11.864 ======================================================== 00:20:11.864 Latency(us) 00:20:11.864 Device Information : IOPS MiB/s Average min max 00:20:11.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15967.60 62.37 4008.22 869.16 211124.91 00:20:11.864 ======================================================== 00:20:11.864 Total : 15967.60 62.37 4008.22 869.16 211124.91 00:20:11.864 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1j8JZXr9VG 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1j8JZXr9VG 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2009742 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2009742 /var/tmp/bdevperf.sock 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2009742 ']' 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.864 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.864 [2024-11-29 13:04:11.636319] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:20:11.864 [2024-11-29 13:04:11.636371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009742 ] 00:20:12.123 [2024-11-29 13:04:11.694502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.123 [2024-11-29 13:04:11.737836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.123 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.123 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.123 13:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1j8JZXr9VG 00:20:12.382 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:12.382 [2024-11-29 13:04:12.194065] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.642 TLSTESTn1 00:20:12.642 13:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:12.642 Running I/O for 10 seconds... 00:20:14.955 5268.00 IOPS, 20.58 MiB/s [2024-11-29T12:04:15.710Z] 5448.00 IOPS, 21.28 MiB/s [2024-11-29T12:04:16.645Z] 5477.67 IOPS, 21.40 MiB/s [2024-11-29T12:04:17.581Z] 5500.50 IOPS, 21.49 MiB/s [2024-11-29T12:04:18.518Z] 5491.60 IOPS, 21.45 MiB/s [2024-11-29T12:04:19.454Z] 5495.00 IOPS, 21.46 MiB/s [2024-11-29T12:04:20.391Z] 5458.71 IOPS, 21.32 MiB/s [2024-11-29T12:04:21.769Z] 5455.25 IOPS, 21.31 MiB/s [2024-11-29T12:04:22.706Z] 5463.33 IOPS, 21.34 MiB/s [2024-11-29T12:04:22.706Z] 5464.20 IOPS, 21.34 MiB/s 00:20:22.886 Latency(us) 00:20:22.886 [2024-11-29T12:04:22.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.886 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:22.886 Verification LBA range: start 0x0 length 0x2000 00:20:22.886 TLSTESTn1 : 10.02 5467.22 21.36 0.00 0.00 23375.44 4957.94 23706.94 00:20:22.886 [2024-11-29T12:04:22.706Z] =================================================================================================================== 00:20:22.886 [2024-11-29T12:04:22.706Z] Total : 5467.22 21.36 0.00 0.00 23375.44 4957.94 23706.94 00:20:22.886 { 00:20:22.886 "results": [ 00:20:22.886 { 00:20:22.886 "job": "TLSTESTn1", 00:20:22.886 "core_mask": "0x4", 00:20:22.886 "workload": "verify", 00:20:22.886 "status": "finished", 00:20:22.886 "verify_range": { 00:20:22.886 "start": 0, 00:20:22.886 "length": 8192 00:20:22.886 }, 00:20:22.886 "queue_depth": 128, 00:20:22.886 "io_size": 4096, 00:20:22.886 
"runtime": 10.017701, 00:20:22.886 "iops": 5467.222469506726, 00:20:22.886 "mibps": 21.35633777151065, 00:20:22.886 "io_failed": 0, 00:20:22.886 "io_timeout": 0, 00:20:22.886 "avg_latency_us": 23375.43957278276, 00:20:22.886 "min_latency_us": 4957.940869565217, 00:20:22.886 "max_latency_us": 23706.935652173914 00:20:22.886 } 00:20:22.886 ], 00:20:22.886 "core_count": 1 00:20:22.886 } 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2009742 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2009742 ']' 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2009742 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2009742 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2009742' 00:20:22.886 killing process with pid 2009742 00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2009742 00:20:22.886 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.886 00:20:22.886 Latency(us) 00:20:22.886 [2024-11-29T12:04:22.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.886 [2024-11-29T12:04:22.706Z] 
===================================================================================================================
00:20:22.886 [2024-11-29T12:04:22.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2009742
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oXHSXcE3qO
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oXHSXcE3qO
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oXHSXcE3qO
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oXHSXcE3qO
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2011570
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2011570 /var/tmp/bdevperf.sock
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2011570 ']'
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:22.886 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:22.886 [2024-11-29 13:04:22.699282] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization...
00:20:22.886 [2024-11-29 13:04:22.699331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011570 ]
00:20:23.145 [2024-11-29 13:04:22.756025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:23.145 [2024-11-29 13:04:22.796093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:23.145 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:23.145 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:20:23.145 13:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oXHSXcE3qO
00:20:23.404 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:20:23.663 [2024-11-29 13:04:23.263986] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:23.663 [2024-11-29 13:04:23.274546] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:20:23.663 [2024-11-29 13:04:23.275382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c01a0 (107): Transport endpoint is not connected
00:20:23.663 [2024-11-29 13:04:23.276374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c01a0 (9): Bad file descriptor
00:20:23.663 [2024-11-29 13:04:23.277376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:20:23.663 [2024-11-29 13:04:23.277390] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:20:23.663 [2024-11-29 13:04:23.277397] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:20:23.663 [2024-11-29 13:04:23.277405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:20:23.663 request:
00:20:23.663 {
00:20:23.663 "name": "TLSTEST",
00:20:23.663 "trtype": "tcp",
00:20:23.663 "traddr": "10.0.0.2",
00:20:23.663 "adrfam": "ipv4",
00:20:23.663 "trsvcid": "4420",
00:20:23.663 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:23.663 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:23.663 "prchk_reftag": false,
00:20:23.663 "prchk_guard": false,
00:20:23.663 "hdgst": false,
00:20:23.663 "ddgst": false,
00:20:23.663 "psk": "key0",
00:20:23.663 "allow_unrecognized_csi": false,
00:20:23.663 "method": "bdev_nvme_attach_controller",
00:20:23.663 "req_id": 1
00:20:23.663 }
00:20:23.663 Got JSON-RPC error response
00:20:23.663 response:
00:20:23.663 {
00:20:23.663 "code": -5,
00:20:23.663 "message": "Input/output error"
00:20:23.663 }
00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2011570
00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2011570 ']'
00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2011570
00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2011570 00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2011570' 00:20:23.663 killing process with pid 2011570 00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2011570 00:20:23.663 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.663 00:20:23.663 Latency(us) 00:20:23.663 [2024-11-29T12:04:23.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.663 [2024-11-29T12:04:23.483Z] =================================================================================================================== 00:20:23.663 [2024-11-29T12:04:23.483Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:23.663 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2011570 00:20:23.922 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:23.922 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:23.922 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:23.922 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:23.922 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:23.922 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1j8JZXr9VG 00:20:23.922 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:20:23.922 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1j8JZXr9VG 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1j8JZXr9VG 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1j8JZXr9VG 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2011604 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2011604 
/var/tmp/bdevperf.sock 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2011604 ']' 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.923 [2024-11-29 13:04:23.550656] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:20:23.923 [2024-11-29 13:04:23.550702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011604 ] 00:20:23.923 [2024-11-29 13:04:23.608538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.923 [2024-11-29 13:04:23.652713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:23.923 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1j8JZXr9VG 00:20:24.182 13:04:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:24.441 [2024-11-29 13:04:24.118145] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.441 [2024-11-29 13:04:24.126665] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:24.441 [2024-11-29 13:04:24.126686] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:24.441 [2024-11-29 13:04:24.126709] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:24.441 [2024-11-29 13:04:24.127511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac31a0 (107): Transport endpoint is not connected 00:20:24.441 [2024-11-29 13:04:24.128505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac31a0 (9): Bad file descriptor 00:20:24.441 [2024-11-29 13:04:24.129507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:24.441 [2024-11-29 13:04:24.129517] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:24.441 [2024-11-29 13:04:24.129524] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:24.441 [2024-11-29 13:04:24.129532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:20:24.441 request: 00:20:24.441 { 00:20:24.441 "name": "TLSTEST", 00:20:24.441 "trtype": "tcp", 00:20:24.441 "traddr": "10.0.0.2", 00:20:24.441 "adrfam": "ipv4", 00:20:24.441 "trsvcid": "4420", 00:20:24.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.441 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:24.441 "prchk_reftag": false, 00:20:24.441 "prchk_guard": false, 00:20:24.441 "hdgst": false, 00:20:24.441 "ddgst": false, 00:20:24.441 "psk": "key0", 00:20:24.441 "allow_unrecognized_csi": false, 00:20:24.441 "method": "bdev_nvme_attach_controller", 00:20:24.441 "req_id": 1 00:20:24.441 } 00:20:24.441 Got JSON-RPC error response 00:20:24.441 response: 00:20:24.441 { 00:20:24.441 "code": -5, 00:20:24.441 "message": "Input/output error" 00:20:24.441 } 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2011604 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2011604 ']' 00:20:24.441 13:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2011604 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2011604 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2011604' 00:20:24.441 killing process with pid 2011604 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2011604 00:20:24.441 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.441 00:20:24.441 Latency(us) 00:20:24.441 [2024-11-29T12:04:24.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.441 [2024-11-29T12:04:24.261Z] =================================================================================================================== 00:20:24.441 [2024-11-29T12:04:24.261Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.441 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2011604 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:24.701 13:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1j8JZXr9VG 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1j8JZXr9VG 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1j8JZXr9VG 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1j8JZXr9VG 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2011827 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2011827 /var/tmp/bdevperf.sock 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2011827 ']' 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.701 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.701 [2024-11-29 13:04:24.411590] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:20:24.701 [2024-11-29 13:04:24.411639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011827 ] 00:20:24.701 [2024-11-29 13:04:24.469496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.701 [2024-11-29 13:04:24.511942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.960 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.960 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:24.960 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1j8JZXr9VG 00:20:25.220 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:25.220 [2024-11-29 13:04:24.957304] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.220 [2024-11-29 13:04:24.968819] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:25.220 [2024-11-29 13:04:24.968840] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:25.220 [2024-11-29 13:04:24.968863] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:20:25.220 [2024-11-29 13:04:24.969714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eff1a0 (107): Transport endpoint is not connected 00:20:25.220 [2024-11-29 13:04:24.970707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eff1a0 (9): Bad file descriptor 00:20:25.220 [2024-11-29 13:04:24.971708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:25.220 [2024-11-29 13:04:24.971719] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:25.220 [2024-11-29 13:04:24.971727] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:25.220 [2024-11-29 13:04:24.971735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:20:25.220 request: 00:20:25.220 { 00:20:25.220 "name": "TLSTEST", 00:20:25.220 "trtype": "tcp", 00:20:25.220 "traddr": "10.0.0.2", 00:20:25.220 "adrfam": "ipv4", 00:20:25.220 "trsvcid": "4420", 00:20:25.220 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:25.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.220 "prchk_reftag": false, 00:20:25.220 "prchk_guard": false, 00:20:25.220 "hdgst": false, 00:20:25.220 "ddgst": false, 00:20:25.220 "psk": "key0", 00:20:25.220 "allow_unrecognized_csi": false, 00:20:25.220 "method": "bdev_nvme_attach_controller", 00:20:25.220 "req_id": 1 00:20:25.220 } 00:20:25.220 Got JSON-RPC error response 00:20:25.220 response: 00:20:25.220 { 00:20:25.220 "code": -5, 00:20:25.220 "message": "Input/output error" 00:20:25.220 } 00:20:25.220 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2011827 00:20:25.220 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2011827 ']' 00:20:25.220 13:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2011827 00:20:25.220 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:25.220 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.220 13:04:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2011827 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2011827' 00:20:25.479 killing process with pid 2011827 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2011827 00:20:25.479 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.479 00:20:25.479 Latency(us) 00:20:25.479 [2024-11-29T12:04:25.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.479 [2024-11-29T12:04:25.299Z] =================================================================================================================== 00:20:25.479 [2024-11-29T12:04:25.299Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2011827 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.479 13:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2012014 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.479 13:04:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2012014 /var/tmp/bdevperf.sock 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2012014 ']' 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.479 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.479 [2024-11-29 13:04:25.249074] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:20:25.479 [2024-11-29 13:04:25.249121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012014 ] 00:20:25.739 [2024-11-29 13:04:25.308343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.739 [2024-11-29 13:04:25.350739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.739 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.739 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.739 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:25.998 [2024-11-29 13:04:25.607659] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:25.998 [2024-11-29 13:04:25.607685] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:25.998 request: 00:20:25.998 { 00:20:25.998 "name": "key0", 00:20:25.998 "path": "", 00:20:25.998 "method": "keyring_file_add_key", 00:20:25.998 "req_id": 1 00:20:25.998 } 00:20:25.998 Got JSON-RPC error response 00:20:25.998 response: 00:20:25.998 { 00:20:25.999 "code": -1, 00:20:25.999 "message": "Operation not permitted" 00:20:25.999 } 00:20:25.999 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:25.999 [2024-11-29 13:04:25.804251] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:20:25.999 [2024-11-29 13:04:25.804276] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:25.999 request: 00:20:25.999 { 00:20:25.999 "name": "TLSTEST", 00:20:25.999 "trtype": "tcp", 00:20:25.999 "traddr": "10.0.0.2", 00:20:25.999 "adrfam": "ipv4", 00:20:25.999 "trsvcid": "4420", 00:20:25.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.999 "prchk_reftag": false, 00:20:25.999 "prchk_guard": false, 00:20:25.999 "hdgst": false, 00:20:25.999 "ddgst": false, 00:20:25.999 "psk": "key0", 00:20:25.999 "allow_unrecognized_csi": false, 00:20:25.999 "method": "bdev_nvme_attach_controller", 00:20:25.999 "req_id": 1 00:20:25.999 } 00:20:25.999 Got JSON-RPC error response 00:20:25.999 response: 00:20:25.999 { 00:20:25.999 "code": -126, 00:20:25.999 "message": "Required key not available" 00:20:25.999 } 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2012014 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2012014 ']' 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2012014 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012014 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012014' 00:20:26.258 killing process with pid 2012014 
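The failure above is the expected negative case: `keyring_file_add_key key0 ''` is rejected with "Non-absolute paths are not allowed", so the subsequent `bdev_nvme_attach_controller --psk key0` fails with "Required key not available". A minimal sketch of that path check (inferred from the log messages only, not SPDK's actual `keyring_file` code):

```python
import os

def check_key_path(path: str) -> None:
    # Sketch of the validation implied by the log: keyring_file_add_key
    # refuses an empty or relative PSK path before touching the file.
    # (Inferred from the error output above, not SPDK's real implementation.)
    if not os.path.isabs(path):
        raise PermissionError("Non-absolute paths are not allowed: %r" % path)
```

An empty string is not an absolute path, which is why the `''` argument in the test trips this check and the RPC returns code -1 ("Operation not permitted").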
00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2012014 00:20:26.258 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.258 00:20:26.258 Latency(us) 00:20:26.258 [2024-11-29T12:04:26.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.258 [2024-11-29T12:04:26.078Z] =================================================================================================================== 00:20:26.258 [2024-11-29T12:04:26.078Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.258 13:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2012014 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2007389 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2007389 ']' 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2007389 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2007389 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2007389' 00:20:26.258 killing process with pid 2007389 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2007389 00:20:26.258 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2007389 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.NUzaKyQQ9e 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:26.518 13:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.NUzaKyQQ9e 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.518 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2012099 00:20:26.519 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2012099 00:20:26.519 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:26.519 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2012099 ']' 00:20:26.519 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.519 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.519 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.519 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.519 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.779 [2024-11-29 13:04:26.346097] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:20:26.779 [2024-11-29 13:04:26.346149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.779 [2024-11-29 13:04:26.414203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.779 [2024-11-29 13:04:26.453256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.779 [2024-11-29 13:04:26.453295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.779 [2024-11-29 13:04:26.453303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.779 [2024-11-29 13:04:26.453309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.779 [2024-11-29 13:04:26.453314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
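The `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` call earlier in the log produced `NVMeTLSkey-1:02:...wWXNJw==:`. A sketch of that encoding, assuming the NVMe/TCP TLS PSK interchange layout (base64 of the configured secret bytes followed by a little-endian CRC32 of those bytes, wrapped in a `NVMeTLSkey-1:<hmac>:...:` envelope); the function name and signature here are illustrative, not SPDK's API:

```python
import base64
import zlib

def format_interchange_psk(secret: str, hmac: int) -> str:
    # Assumed layout of the PSK interchange format seen in the log:
    # prefix, two-digit HMAC identifier (02 = SHA-384), then
    # base64(secret || CRC32(secret) as 4 little-endian bytes), colon-terminated.
    raw = secret.encode()
    raw += zlib.crc32(raw).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02d}:{}:".format(hmac, base64.b64encode(raw).decode())
```

With the test's 48-character secret and digest 2, this reproduces the `NVMeTLSkey-1:02:MDAx...` string written to the `mktemp` file and `chmod 0600`'d above.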
00:20:26.779 [2024-11-29 13:04:26.453942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.779 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.779 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:26.779 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.779 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.779 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.779 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.779 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.NUzaKyQQ9e 00:20:26.779 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NUzaKyQQ9e 00:20:26.779 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:27.039 [2024-11-29 13:04:26.753880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.039 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:27.296 13:04:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:27.555 [2024-11-29 13:04:27.122851] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.555 [2024-11-29 13:04:27.123092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:27.555 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:27.555 malloc0 00:20:27.555 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:27.814 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NUzaKyQQ9e 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NUzaKyQQ9e 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NUzaKyQQ9e 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2012459 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.073 13:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2012459 /var/tmp/bdevperf.sock 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2012459 ']' 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.073 13:04:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:28.332 [2024-11-29 13:04:27.908889] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:20:28.333 [2024-11-29 13:04:27.908936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012459 ] 00:20:28.333 [2024-11-29 13:04:27.966900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.333 [2024-11-29 13:04:28.010218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.333 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.333 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:28.333 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NUzaKyQQ9e 00:20:28.592 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:28.851 [2024-11-29 13:04:28.471652] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.851 TLSTESTn1 00:20:28.851 13:04:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:28.851 Running I/O for 10 seconds... 
00:20:31.166 4608.00 IOPS, 18.00 MiB/s [2024-11-29T12:04:31.925Z] 4194.00 IOPS, 16.38 MiB/s [2024-11-29T12:04:32.861Z] 4215.00 IOPS, 16.46 MiB/s [2024-11-29T12:04:33.798Z] 4240.75 IOPS, 16.57 MiB/s [2024-11-29T12:04:34.733Z] 4175.80 IOPS, 16.31 MiB/s [2024-11-29T12:04:35.694Z] 4124.00 IOPS, 16.11 MiB/s [2024-11-29T12:04:37.069Z] 4076.86 IOPS, 15.93 MiB/s [2024-11-29T12:04:38.006Z] 4041.50 IOPS, 15.79 MiB/s [2024-11-29T12:04:38.944Z] 4013.11 IOPS, 15.68 MiB/s [2024-11-29T12:04:38.944Z] 3993.80 IOPS, 15.60 MiB/s 00:20:39.124 Latency(us) 00:20:39.124 [2024-11-29T12:04:38.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.124 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:39.124 Verification LBA range: start 0x0 length 0x2000 00:20:39.124 TLSTESTn1 : 10.03 3994.44 15.60 0.00 0.00 31983.01 5955.23 48325.68 00:20:39.124 [2024-11-29T12:04:38.944Z] =================================================================================================================== 00:20:39.124 [2024-11-29T12:04:38.944Z] Total : 3994.44 15.60 0.00 0.00 31983.01 5955.23 48325.68 00:20:39.124 { 00:20:39.124 "results": [ 00:20:39.124 { 00:20:39.124 "job": "TLSTESTn1", 00:20:39.124 "core_mask": "0x4", 00:20:39.124 "workload": "verify", 00:20:39.124 "status": "finished", 00:20:39.124 "verify_range": { 00:20:39.124 "start": 0, 00:20:39.124 "length": 8192 00:20:39.124 }, 00:20:39.124 "queue_depth": 128, 00:20:39.124 "io_size": 4096, 00:20:39.124 "runtime": 10.029944, 00:20:39.124 "iops": 3994.439051703579, 00:20:39.124 "mibps": 15.603277545717106, 00:20:39.124 "io_failed": 0, 00:20:39.124 "io_timeout": 0, 00:20:39.124 "avg_latency_us": 31983.010973746357, 00:20:39.124 "min_latency_us": 5955.227826086956, 00:20:39.124 "max_latency_us": 48325.67652173913 00:20:39.124 } 00:20:39.124 ], 00:20:39.124 "core_count": 1 00:20:39.124 } 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2012459 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2012459 ']' 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2012459 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012459 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012459' 00:20:39.124 killing process with pid 2012459 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2012459 00:20:39.124 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.124 00:20:39.124 Latency(us) 00:20:39.124 [2024-11-29T12:04:38.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.124 [2024-11-29T12:04:38.944Z] =================================================================================================================== 00:20:39.124 [2024-11-29T12:04:38.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2012459 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.NUzaKyQQ9e 00:20:39.124 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NUzaKyQQ9e 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NUzaKyQQ9e 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NUzaKyQQ9e 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NUzaKyQQ9e 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2014180 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2014180 /var/tmp/bdevperf.sock 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2014180 ']' 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.384 13:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.384 [2024-11-29 13:04:38.995618] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:20:39.384 [2024-11-29 13:04:38.995669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014180 ] 00:20:39.384 [2024-11-29 13:04:39.055101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.384 [2024-11-29 13:04:39.093106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.384 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.384 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.384 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NUzaKyQQ9e 00:20:39.643 [2024-11-29 13:04:39.365023] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NUzaKyQQ9e': 0100666 00:20:39.643 [2024-11-29 13:04:39.365059] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:39.643 request: 00:20:39.643 { 00:20:39.643 "name": "key0", 00:20:39.643 "path": "/tmp/tmp.NUzaKyQQ9e", 00:20:39.643 "method": "keyring_file_add_key", 00:20:39.643 "req_id": 1 00:20:39.643 } 00:20:39.643 Got JSON-RPC error response 00:20:39.643 response: 00:20:39.643 { 00:20:39.643 "code": -1, 00:20:39.643 "message": "Operation not permitted" 00:20:39.643 } 00:20:39.643 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:39.902 [2024-11-29 13:04:39.557603] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.902 [2024-11-29 13:04:39.557639] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:39.902 request: 00:20:39.902 { 00:20:39.902 "name": "TLSTEST", 00:20:39.902 "trtype": "tcp", 00:20:39.902 "traddr": "10.0.0.2", 00:20:39.902 "adrfam": "ipv4", 00:20:39.902 "trsvcid": "4420", 00:20:39.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.902 "prchk_reftag": false, 00:20:39.902 "prchk_guard": false, 00:20:39.902 "hdgst": false, 00:20:39.902 "ddgst": false, 00:20:39.902 "psk": "key0", 00:20:39.902 "allow_unrecognized_csi": false, 00:20:39.902 "method": "bdev_nvme_attach_controller", 00:20:39.902 "req_id": 1 00:20:39.902 } 00:20:39.902 Got JSON-RPC error response 00:20:39.902 response: 00:20:39.902 { 00:20:39.902 "code": -126, 00:20:39.902 "message": "Required key not available" 00:20:39.902 } 00:20:39.902 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2014180 00:20:39.902 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2014180 ']' 00:20:39.902 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2014180 00:20:39.902 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.902 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.902 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2014180 00:20:39.903 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:39.903 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:39.903 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2014180' 00:20:39.903 killing process with pid 2014180 00:20:39.903 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2014180 00:20:39.903 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.903 00:20:39.903 Latency(us) 00:20:39.903 [2024-11-29T12:04:39.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.903 [2024-11-29T12:04:39.723Z] =================================================================================================================== 00:20:39.903 [2024-11-29T12:04:39.723Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:39.903 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2014180 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2012099 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2012099 ']' 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2012099 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012099 00:20:40.162 
13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012099' 00:20:40.162 killing process with pid 2012099 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2012099 00:20:40.162 13:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2012099 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2014421 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2014421 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2014421 ']' 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:40.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.421 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.421 [2024-11-29 13:04:40.063001] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:20:40.421 [2024-11-29 13:04:40.063052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.421 [2024-11-29 13:04:40.129520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.421 [2024-11-29 13:04:40.168774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.421 [2024-11-29 13:04:40.168813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.421 [2024-11-29 13:04:40.168821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.421 [2024-11-29 13:04:40.168827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.421 [2024-11-29 13:04:40.168832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
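Earlier in the log, `chmod 0666 /tmp/tmp.NUzaKyQQ9e` made `keyring_file_add_key` fail with "Invalid permissions for key file ... 0100666", while the same file at mode 0600 was accepted. A sketch of that mode gate, assuming group/other access bits are what is rejected (inferred from the log output, not SPDK's actual `keyring_file` code):

```python
import os
import stat

def check_key_file_mode(path: str) -> None:
    # Sketch of the permission check implied by the log: a PSK file that is
    # readable or writable by group or others (e.g. 0666) is refused;
    # owner-only modes such as 0600 pass.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            "Invalid permissions for key file '%s': 0%o" % (path, mode))
```

This matches the test flow: the first `run_bdevperf` succeeds with the 0600 key, and the deliberate `chmod 0666` turns the follow-up `keyring_file_add_key` into the expected -1 "Operation not permitted" response.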
00:20:40.421 [2024-11-29 13:04:40.169443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.680 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.680 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:40.680 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.680 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.680 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.680 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.680 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.NUzaKyQQ9e 00:20:40.680 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:40.680 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.NUzaKyQQ9e 00:20:40.681 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:40.681 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.681 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:40.681 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.681 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.NUzaKyQQ9e 00:20:40.681 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NUzaKyQQ9e 00:20:40.681 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.681 [2024-11-29 13:04:40.482931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.939 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.939 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:41.199 [2024-11-29 13:04:40.875936] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.199 [2024-11-29 13:04:40.876159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.199 13:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:41.458 malloc0 00:20:41.458 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.458 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NUzaKyQQ9e 00:20:41.716 [2024-11-29 13:04:41.437463] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NUzaKyQQ9e': 0100666 00:20:41.716 [2024-11-29 13:04:41.437489] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:41.716 request: 00:20:41.716 { 00:20:41.716 "name": "key0", 00:20:41.716 "path": "/tmp/tmp.NUzaKyQQ9e", 00:20:41.716 "method": "keyring_file_add_key", 00:20:41.716 "req_id": 1 
00:20:41.716 } 00:20:41.716 Got JSON-RPC error response 00:20:41.716 response: 00:20:41.716 { 00:20:41.716 "code": -1, 00:20:41.716 "message": "Operation not permitted" 00:20:41.716 } 00:20:41.716 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:41.975 [2024-11-29 13:04:41.617958] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:41.975 [2024-11-29 13:04:41.617990] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:41.975 request: 00:20:41.975 { 00:20:41.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.975 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.975 "psk": "key0", 00:20:41.975 "method": "nvmf_subsystem_add_host", 00:20:41.975 "req_id": 1 00:20:41.975 } 00:20:41.975 Got JSON-RPC error response 00:20:41.975 response: 00:20:41.975 { 00:20:41.975 "code": -32603, 00:20:41.975 "message": "Internal error" 00:20:41.975 } 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2014421 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2014421 ']' 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2014421 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.975 13:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2014421 00:20:41.975 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:41.976 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:41.976 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2014421' 00:20:41.976 killing process with pid 2014421 00:20:41.976 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2014421 00:20:41.976 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2014421 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.NUzaKyQQ9e 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2014704 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2014704 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2014704 ']' 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.235 13:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.235 [2024-11-29 13:04:41.928867] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:20:42.235 [2024-11-29 13:04:41.928915] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.235 [2024-11-29 13:04:41.996774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.235 [2024-11-29 13:04:42.039235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.235 [2024-11-29 13:04:42.039273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.235 [2024-11-29 13:04:42.039281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.235 [2024-11-29 13:04:42.039288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.235 [2024-11-29 13:04:42.039293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.235 [2024-11-29 13:04:42.039869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.494 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.494 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:42.494 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.494 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.494 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.494 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.494 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.NUzaKyQQ9e 00:20:42.494 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NUzaKyQQ9e 00:20:42.494 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.752 [2024-11-29 13:04:42.346091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.752 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:42.752 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:43.011 [2024-11-29 13:04:42.719057] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.011 [2024-11-29 13:04:42.719270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:43.011 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:43.270 malloc0 00:20:43.270 13:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:43.529 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NUzaKyQQ9e 00:20:43.529 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2015072 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2015072 /var/tmp/bdevperf.sock 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2015072 ']' 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:20:43.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.788 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.788 [2024-11-29 13:04:43.543280] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:20:43.788 [2024-11-29 13:04:43.543329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015072 ] 00:20:43.788 [2024-11-29 13:04:43.602053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.047 [2024-11-29 13:04:43.645462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.047 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.047 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:44.047 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NUzaKyQQ9e 00:20:44.306 13:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:44.306 [2024-11-29 13:04:44.102851] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.565 TLSTESTn1 00:20:44.565 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:44.825 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:44.825 "subsystems": [ 00:20:44.825 { 00:20:44.825 "subsystem": "keyring", 00:20:44.825 "config": [ 00:20:44.825 { 00:20:44.825 "method": "keyring_file_add_key", 00:20:44.825 "params": { 00:20:44.825 "name": "key0", 00:20:44.825 "path": "/tmp/tmp.NUzaKyQQ9e" 00:20:44.825 } 00:20:44.825 } 00:20:44.825 ] 00:20:44.825 }, 00:20:44.825 { 00:20:44.825 "subsystem": "iobuf", 00:20:44.825 "config": [ 00:20:44.825 { 00:20:44.825 "method": "iobuf_set_options", 00:20:44.825 "params": { 00:20:44.825 "small_pool_count": 8192, 00:20:44.825 "large_pool_count": 1024, 00:20:44.825 "small_bufsize": 8192, 00:20:44.825 "large_bufsize": 135168, 00:20:44.825 "enable_numa": false 00:20:44.825 } 00:20:44.825 } 00:20:44.825 ] 00:20:44.825 }, 00:20:44.825 { 00:20:44.825 "subsystem": "sock", 00:20:44.825 "config": [ 00:20:44.825 { 00:20:44.825 "method": "sock_set_default_impl", 00:20:44.825 "params": { 00:20:44.825 "impl_name": "posix" 00:20:44.825 } 00:20:44.825 }, 00:20:44.825 { 00:20:44.825 "method": "sock_impl_set_options", 00:20:44.825 "params": { 00:20:44.825 "impl_name": "ssl", 00:20:44.825 "recv_buf_size": 4096, 00:20:44.825 "send_buf_size": 4096, 00:20:44.825 "enable_recv_pipe": true, 00:20:44.825 "enable_quickack": false, 00:20:44.825 "enable_placement_id": 0, 00:20:44.825 "enable_zerocopy_send_server": true, 00:20:44.825 "enable_zerocopy_send_client": false, 00:20:44.825 "zerocopy_threshold": 0, 00:20:44.825 "tls_version": 0, 00:20:44.825 "enable_ktls": false 00:20:44.825 } 00:20:44.825 }, 00:20:44.825 { 00:20:44.825 "method": "sock_impl_set_options", 00:20:44.825 "params": { 00:20:44.825 "impl_name": "posix", 00:20:44.825 "recv_buf_size": 2097152, 00:20:44.825 "send_buf_size": 2097152, 00:20:44.825 "enable_recv_pipe": true, 00:20:44.825 "enable_quickack": false, 00:20:44.825 "enable_placement_id": 0, 
00:20:44.825 "enable_zerocopy_send_server": true, 00:20:44.825 "enable_zerocopy_send_client": false, 00:20:44.825 "zerocopy_threshold": 0, 00:20:44.825 "tls_version": 0, 00:20:44.825 "enable_ktls": false 00:20:44.825 } 00:20:44.825 } 00:20:44.825 ] 00:20:44.825 }, 00:20:44.825 { 00:20:44.825 "subsystem": "vmd", 00:20:44.825 "config": [] 00:20:44.825 }, 00:20:44.825 { 00:20:44.825 "subsystem": "accel", 00:20:44.825 "config": [ 00:20:44.825 { 00:20:44.825 "method": "accel_set_options", 00:20:44.825 "params": { 00:20:44.825 "small_cache_size": 128, 00:20:44.825 "large_cache_size": 16, 00:20:44.825 "task_count": 2048, 00:20:44.825 "sequence_count": 2048, 00:20:44.825 "buf_count": 2048 00:20:44.825 } 00:20:44.825 } 00:20:44.825 ] 00:20:44.825 }, 00:20:44.825 { 00:20:44.825 "subsystem": "bdev", 00:20:44.825 "config": [ 00:20:44.825 { 00:20:44.825 "method": "bdev_set_options", 00:20:44.826 "params": { 00:20:44.826 "bdev_io_pool_size": 65535, 00:20:44.826 "bdev_io_cache_size": 256, 00:20:44.826 "bdev_auto_examine": true, 00:20:44.826 "iobuf_small_cache_size": 128, 00:20:44.826 "iobuf_large_cache_size": 16 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "method": "bdev_raid_set_options", 00:20:44.826 "params": { 00:20:44.826 "process_window_size_kb": 1024, 00:20:44.826 "process_max_bandwidth_mb_sec": 0 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "method": "bdev_iscsi_set_options", 00:20:44.826 "params": { 00:20:44.826 "timeout_sec": 30 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "method": "bdev_nvme_set_options", 00:20:44.826 "params": { 00:20:44.826 "action_on_timeout": "none", 00:20:44.826 "timeout_us": 0, 00:20:44.826 "timeout_admin_us": 0, 00:20:44.826 "keep_alive_timeout_ms": 10000, 00:20:44.826 "arbitration_burst": 0, 00:20:44.826 "low_priority_weight": 0, 00:20:44.826 "medium_priority_weight": 0, 00:20:44.826 "high_priority_weight": 0, 00:20:44.826 "nvme_adminq_poll_period_us": 10000, 00:20:44.826 "nvme_ioq_poll_period_us": 0, 
00:20:44.826 "io_queue_requests": 0, 00:20:44.826 "delay_cmd_submit": true, 00:20:44.826 "transport_retry_count": 4, 00:20:44.826 "bdev_retry_count": 3, 00:20:44.826 "transport_ack_timeout": 0, 00:20:44.826 "ctrlr_loss_timeout_sec": 0, 00:20:44.826 "reconnect_delay_sec": 0, 00:20:44.826 "fast_io_fail_timeout_sec": 0, 00:20:44.826 "disable_auto_failback": false, 00:20:44.826 "generate_uuids": false, 00:20:44.826 "transport_tos": 0, 00:20:44.826 "nvme_error_stat": false, 00:20:44.826 "rdma_srq_size": 0, 00:20:44.826 "io_path_stat": false, 00:20:44.826 "allow_accel_sequence": false, 00:20:44.826 "rdma_max_cq_size": 0, 00:20:44.826 "rdma_cm_event_timeout_ms": 0, 00:20:44.826 "dhchap_digests": [ 00:20:44.826 "sha256", 00:20:44.826 "sha384", 00:20:44.826 "sha512" 00:20:44.826 ], 00:20:44.826 "dhchap_dhgroups": [ 00:20:44.826 "null", 00:20:44.826 "ffdhe2048", 00:20:44.826 "ffdhe3072", 00:20:44.826 "ffdhe4096", 00:20:44.826 "ffdhe6144", 00:20:44.826 "ffdhe8192" 00:20:44.826 ] 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "method": "bdev_nvme_set_hotplug", 00:20:44.826 "params": { 00:20:44.826 "period_us": 100000, 00:20:44.826 "enable": false 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "method": "bdev_malloc_create", 00:20:44.826 "params": { 00:20:44.826 "name": "malloc0", 00:20:44.826 "num_blocks": 8192, 00:20:44.826 "block_size": 4096, 00:20:44.826 "physical_block_size": 4096, 00:20:44.826 "uuid": "48c23d71-db26-4f74-aa10-e54fec911de5", 00:20:44.826 "optimal_io_boundary": 0, 00:20:44.826 "md_size": 0, 00:20:44.826 "dif_type": 0, 00:20:44.826 "dif_is_head_of_md": false, 00:20:44.826 "dif_pi_format": 0 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "method": "bdev_wait_for_examine" 00:20:44.826 } 00:20:44.826 ] 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "subsystem": "nbd", 00:20:44.826 "config": [] 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "subsystem": "scheduler", 00:20:44.826 "config": [ 00:20:44.826 { 00:20:44.826 "method": 
"framework_set_scheduler", 00:20:44.826 "params": { 00:20:44.826 "name": "static" 00:20:44.826 } 00:20:44.826 } 00:20:44.826 ] 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "subsystem": "nvmf", 00:20:44.826 "config": [ 00:20:44.826 { 00:20:44.826 "method": "nvmf_set_config", 00:20:44.826 "params": { 00:20:44.826 "discovery_filter": "match_any", 00:20:44.826 "admin_cmd_passthru": { 00:20:44.826 "identify_ctrlr": false 00:20:44.826 }, 00:20:44.826 "dhchap_digests": [ 00:20:44.826 "sha256", 00:20:44.826 "sha384", 00:20:44.826 "sha512" 00:20:44.826 ], 00:20:44.826 "dhchap_dhgroups": [ 00:20:44.826 "null", 00:20:44.826 "ffdhe2048", 00:20:44.826 "ffdhe3072", 00:20:44.826 "ffdhe4096", 00:20:44.826 "ffdhe6144", 00:20:44.826 "ffdhe8192" 00:20:44.826 ] 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "method": "nvmf_set_max_subsystems", 00:20:44.826 "params": { 00:20:44.826 "max_subsystems": 1024 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "method": "nvmf_set_crdt", 00:20:44.826 "params": { 00:20:44.826 "crdt1": 0, 00:20:44.826 "crdt2": 0, 00:20:44.826 "crdt3": 0 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.826 "method": "nvmf_create_transport", 00:20:44.826 "params": { 00:20:44.826 "trtype": "TCP", 00:20:44.826 "max_queue_depth": 128, 00:20:44.826 "max_io_qpairs_per_ctrlr": 127, 00:20:44.826 "in_capsule_data_size": 4096, 00:20:44.826 "max_io_size": 131072, 00:20:44.826 "io_unit_size": 131072, 00:20:44.826 "max_aq_depth": 128, 00:20:44.826 "num_shared_buffers": 511, 00:20:44.826 "buf_cache_size": 4294967295, 00:20:44.826 "dif_insert_or_strip": false, 00:20:44.826 "zcopy": false, 00:20:44.826 "c2h_success": false, 00:20:44.826 "sock_priority": 0, 00:20:44.826 "abort_timeout_sec": 1, 00:20:44.826 "ack_timeout": 0, 00:20:44.826 "data_wr_pool_size": 0 00:20:44.826 } 00:20:44.826 }, 00:20:44.826 { 00:20:44.827 "method": "nvmf_create_subsystem", 00:20:44.827 "params": { 00:20:44.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.827 
"allow_any_host": false, 00:20:44.827 "serial_number": "SPDK00000000000001", 00:20:44.827 "model_number": "SPDK bdev Controller", 00:20:44.827 "max_namespaces": 10, 00:20:44.827 "min_cntlid": 1, 00:20:44.827 "max_cntlid": 65519, 00:20:44.827 "ana_reporting": false 00:20:44.827 } 00:20:44.827 }, 00:20:44.827 { 00:20:44.827 "method": "nvmf_subsystem_add_host", 00:20:44.827 "params": { 00:20:44.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.827 "host": "nqn.2016-06.io.spdk:host1", 00:20:44.827 "psk": "key0" 00:20:44.827 } 00:20:44.827 }, 00:20:44.827 { 00:20:44.827 "method": "nvmf_subsystem_add_ns", 00:20:44.827 "params": { 00:20:44.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.827 "namespace": { 00:20:44.827 "nsid": 1, 00:20:44.827 "bdev_name": "malloc0", 00:20:44.827 "nguid": "48C23D71DB264F74AA10E54FEC911DE5", 00:20:44.827 "uuid": "48c23d71-db26-4f74-aa10-e54fec911de5", 00:20:44.827 "no_auto_visible": false 00:20:44.827 } 00:20:44.827 } 00:20:44.827 }, 00:20:44.827 { 00:20:44.827 "method": "nvmf_subsystem_add_listener", 00:20:44.827 "params": { 00:20:44.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.827 "listen_address": { 00:20:44.827 "trtype": "TCP", 00:20:44.827 "adrfam": "IPv4", 00:20:44.827 "traddr": "10.0.0.2", 00:20:44.827 "trsvcid": "4420" 00:20:44.827 }, 00:20:44.827 "secure_channel": true 00:20:44.827 } 00:20:44.827 } 00:20:44.827 ] 00:20:44.827 } 00:20:44.827 ] 00:20:44.827 }' 00:20:44.827 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:45.085 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:45.085 "subsystems": [ 00:20:45.085 { 00:20:45.085 "subsystem": "keyring", 00:20:45.085 "config": [ 00:20:45.085 { 00:20:45.085 "method": "keyring_file_add_key", 00:20:45.085 "params": { 00:20:45.085 "name": "key0", 00:20:45.085 "path": "/tmp/tmp.NUzaKyQQ9e" 00:20:45.085 } 
00:20:45.085 } 00:20:45.085 ] 00:20:45.085 }, 00:20:45.085 { 00:20:45.085 "subsystem": "iobuf", 00:20:45.085 "config": [ 00:20:45.086 { 00:20:45.086 "method": "iobuf_set_options", 00:20:45.086 "params": { 00:20:45.086 "small_pool_count": 8192, 00:20:45.086 "large_pool_count": 1024, 00:20:45.086 "small_bufsize": 8192, 00:20:45.086 "large_bufsize": 135168, 00:20:45.086 "enable_numa": false 00:20:45.086 } 00:20:45.086 } 00:20:45.086 ] 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "subsystem": "sock", 00:20:45.086 "config": [ 00:20:45.086 { 00:20:45.086 "method": "sock_set_default_impl", 00:20:45.086 "params": { 00:20:45.086 "impl_name": "posix" 00:20:45.086 } 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "method": "sock_impl_set_options", 00:20:45.086 "params": { 00:20:45.086 "impl_name": "ssl", 00:20:45.086 "recv_buf_size": 4096, 00:20:45.086 "send_buf_size": 4096, 00:20:45.086 "enable_recv_pipe": true, 00:20:45.086 "enable_quickack": false, 00:20:45.086 "enable_placement_id": 0, 00:20:45.086 "enable_zerocopy_send_server": true, 00:20:45.086 "enable_zerocopy_send_client": false, 00:20:45.086 "zerocopy_threshold": 0, 00:20:45.086 "tls_version": 0, 00:20:45.086 "enable_ktls": false 00:20:45.086 } 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "method": "sock_impl_set_options", 00:20:45.086 "params": { 00:20:45.086 "impl_name": "posix", 00:20:45.086 "recv_buf_size": 2097152, 00:20:45.086 "send_buf_size": 2097152, 00:20:45.086 "enable_recv_pipe": true, 00:20:45.086 "enable_quickack": false, 00:20:45.086 "enable_placement_id": 0, 00:20:45.086 "enable_zerocopy_send_server": true, 00:20:45.086 "enable_zerocopy_send_client": false, 00:20:45.086 "zerocopy_threshold": 0, 00:20:45.086 "tls_version": 0, 00:20:45.086 "enable_ktls": false 00:20:45.086 } 00:20:45.086 } 00:20:45.086 ] 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "subsystem": "vmd", 00:20:45.086 "config": [] 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "subsystem": "accel", 00:20:45.086 "config": [ 00:20:45.086 { 00:20:45.086 
"method": "accel_set_options", 00:20:45.086 "params": { 00:20:45.086 "small_cache_size": 128, 00:20:45.086 "large_cache_size": 16, 00:20:45.086 "task_count": 2048, 00:20:45.086 "sequence_count": 2048, 00:20:45.086 "buf_count": 2048 00:20:45.086 } 00:20:45.086 } 00:20:45.086 ] 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "subsystem": "bdev", 00:20:45.086 "config": [ 00:20:45.086 { 00:20:45.086 "method": "bdev_set_options", 00:20:45.086 "params": { 00:20:45.086 "bdev_io_pool_size": 65535, 00:20:45.086 "bdev_io_cache_size": 256, 00:20:45.086 "bdev_auto_examine": true, 00:20:45.086 "iobuf_small_cache_size": 128, 00:20:45.086 "iobuf_large_cache_size": 16 00:20:45.086 } 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "method": "bdev_raid_set_options", 00:20:45.086 "params": { 00:20:45.086 "process_window_size_kb": 1024, 00:20:45.086 "process_max_bandwidth_mb_sec": 0 00:20:45.086 } 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "method": "bdev_iscsi_set_options", 00:20:45.086 "params": { 00:20:45.086 "timeout_sec": 30 00:20:45.086 } 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "method": "bdev_nvme_set_options", 00:20:45.086 "params": { 00:20:45.086 "action_on_timeout": "none", 00:20:45.086 "timeout_us": 0, 00:20:45.086 "timeout_admin_us": 0, 00:20:45.086 "keep_alive_timeout_ms": 10000, 00:20:45.086 "arbitration_burst": 0, 00:20:45.086 "low_priority_weight": 0, 00:20:45.086 "medium_priority_weight": 0, 00:20:45.086 "high_priority_weight": 0, 00:20:45.086 "nvme_adminq_poll_period_us": 10000, 00:20:45.086 "nvme_ioq_poll_period_us": 0, 00:20:45.086 "io_queue_requests": 512, 00:20:45.086 "delay_cmd_submit": true, 00:20:45.086 "transport_retry_count": 4, 00:20:45.086 "bdev_retry_count": 3, 00:20:45.086 "transport_ack_timeout": 0, 00:20:45.086 "ctrlr_loss_timeout_sec": 0, 00:20:45.086 "reconnect_delay_sec": 0, 00:20:45.086 "fast_io_fail_timeout_sec": 0, 00:20:45.086 "disable_auto_failback": false, 00:20:45.086 "generate_uuids": false, 00:20:45.086 "transport_tos": 0, 00:20:45.086 
"nvme_error_stat": false, 00:20:45.086 "rdma_srq_size": 0, 00:20:45.086 "io_path_stat": false, 00:20:45.086 "allow_accel_sequence": false, 00:20:45.086 "rdma_max_cq_size": 0, 00:20:45.086 "rdma_cm_event_timeout_ms": 0, 00:20:45.086 "dhchap_digests": [ 00:20:45.086 "sha256", 00:20:45.086 "sha384", 00:20:45.086 "sha512" 00:20:45.086 ], 00:20:45.086 "dhchap_dhgroups": [ 00:20:45.086 "null", 00:20:45.086 "ffdhe2048", 00:20:45.086 "ffdhe3072", 00:20:45.086 "ffdhe4096", 00:20:45.086 "ffdhe6144", 00:20:45.086 "ffdhe8192" 00:20:45.086 ] 00:20:45.086 } 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "method": "bdev_nvme_attach_controller", 00:20:45.086 "params": { 00:20:45.086 "name": "TLSTEST", 00:20:45.086 "trtype": "TCP", 00:20:45.086 "adrfam": "IPv4", 00:20:45.086 "traddr": "10.0.0.2", 00:20:45.086 "trsvcid": "4420", 00:20:45.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.086 "prchk_reftag": false, 00:20:45.086 "prchk_guard": false, 00:20:45.086 "ctrlr_loss_timeout_sec": 0, 00:20:45.086 "reconnect_delay_sec": 0, 00:20:45.086 "fast_io_fail_timeout_sec": 0, 00:20:45.086 "psk": "key0", 00:20:45.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.086 "hdgst": false, 00:20:45.086 "ddgst": false, 00:20:45.086 "multipath": "multipath" 00:20:45.086 } 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "method": "bdev_nvme_set_hotplug", 00:20:45.086 "params": { 00:20:45.086 "period_us": 100000, 00:20:45.086 "enable": false 00:20:45.086 } 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "method": "bdev_wait_for_examine" 00:20:45.086 } 00:20:45.086 ] 00:20:45.086 }, 00:20:45.086 { 00:20:45.086 "subsystem": "nbd", 00:20:45.086 "config": [] 00:20:45.086 } 00:20:45.086 ] 00:20:45.086 }' 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2015072 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2015072 ']' 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2015072 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2015072 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2015072' 00:20:45.086 killing process with pid 2015072 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2015072 00:20:45.086 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.086 00:20:45.086 Latency(us) 00:20:45.086 [2024-11-29T12:04:44.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.086 [2024-11-29T12:04:44.906Z] =================================================================================================================== 00:20:45.086 [2024-11-29T12:04:44.906Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.086 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2015072 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2014704 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2014704 ']' 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2014704 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2014704 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2014704' 00:20:45.345 killing process with pid 2014704 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2014704 00:20:45.345 13:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2014704 00:20:45.604 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:45.604 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.604 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.604 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:45.604 "subsystems": [ 00:20:45.604 { 00:20:45.604 "subsystem": "keyring", 00:20:45.604 "config": [ 00:20:45.604 { 00:20:45.604 "method": "keyring_file_add_key", 00:20:45.604 "params": { 00:20:45.604 "name": "key0", 00:20:45.604 "path": "/tmp/tmp.NUzaKyQQ9e" 00:20:45.604 } 00:20:45.604 } 00:20:45.604 ] 00:20:45.604 }, 00:20:45.604 { 00:20:45.604 "subsystem": "iobuf", 00:20:45.604 "config": [ 00:20:45.604 { 00:20:45.604 "method": "iobuf_set_options", 00:20:45.604 "params": { 00:20:45.604 "small_pool_count": 8192, 00:20:45.604 "large_pool_count": 1024, 00:20:45.604 "small_bufsize": 8192, 00:20:45.604 "large_bufsize": 135168, 00:20:45.604 "enable_numa": false 00:20:45.604 } 00:20:45.604 } 00:20:45.604 ] 00:20:45.604 }, 
00:20:45.604 { 00:20:45.604 "subsystem": "sock", 00:20:45.604 "config": [ 00:20:45.604 { 00:20:45.604 "method": "sock_set_default_impl", 00:20:45.604 "params": { 00:20:45.604 "impl_name": "posix" 00:20:45.604 } 00:20:45.604 }, 00:20:45.604 { 00:20:45.604 "method": "sock_impl_set_options", 00:20:45.604 "params": { 00:20:45.604 "impl_name": "ssl", 00:20:45.604 "recv_buf_size": 4096, 00:20:45.604 "send_buf_size": 4096, 00:20:45.604 "enable_recv_pipe": true, 00:20:45.604 "enable_quickack": false, 00:20:45.604 "enable_placement_id": 0, 00:20:45.604 "enable_zerocopy_send_server": true, 00:20:45.604 "enable_zerocopy_send_client": false, 00:20:45.604 "zerocopy_threshold": 0, 00:20:45.604 "tls_version": 0, 00:20:45.604 "enable_ktls": false 00:20:45.604 } 00:20:45.604 }, 00:20:45.604 { 00:20:45.604 "method": "sock_impl_set_options", 00:20:45.604 "params": { 00:20:45.604 "impl_name": "posix", 00:20:45.604 "recv_buf_size": 2097152, 00:20:45.604 "send_buf_size": 2097152, 00:20:45.604 "enable_recv_pipe": true, 00:20:45.604 "enable_quickack": false, 00:20:45.604 "enable_placement_id": 0, 00:20:45.604 "enable_zerocopy_send_server": true, 00:20:45.604 "enable_zerocopy_send_client": false, 00:20:45.604 "zerocopy_threshold": 0, 00:20:45.604 "tls_version": 0, 00:20:45.604 "enable_ktls": false 00:20:45.604 } 00:20:45.604 } 00:20:45.604 ] 00:20:45.604 }, 00:20:45.604 { 00:20:45.604 "subsystem": "vmd", 00:20:45.604 "config": [] 00:20:45.604 }, 00:20:45.604 { 00:20:45.604 "subsystem": "accel", 00:20:45.604 "config": [ 00:20:45.604 { 00:20:45.604 "method": "accel_set_options", 00:20:45.604 "params": { 00:20:45.604 "small_cache_size": 128, 00:20:45.604 "large_cache_size": 16, 00:20:45.604 "task_count": 2048, 00:20:45.604 "sequence_count": 2048, 00:20:45.604 "buf_count": 2048 00:20:45.604 } 00:20:45.604 } 00:20:45.604 ] 00:20:45.604 }, 00:20:45.604 { 00:20:45.604 "subsystem": "bdev", 00:20:45.604 "config": [ 00:20:45.604 { 00:20:45.604 "method": "bdev_set_options", 00:20:45.604 "params": { 
00:20:45.604 "bdev_io_pool_size": 65535, 00:20:45.604 "bdev_io_cache_size": 256, 00:20:45.604 "bdev_auto_examine": true, 00:20:45.604 "iobuf_small_cache_size": 128, 00:20:45.604 "iobuf_large_cache_size": 16 00:20:45.604 } 00:20:45.604 }, 00:20:45.604 { 00:20:45.604 "method": "bdev_raid_set_options", 00:20:45.604 "params": { 00:20:45.604 "process_window_size_kb": 1024, 00:20:45.604 "process_max_bandwidth_mb_sec": 0 00:20:45.604 } 00:20:45.604 }, 00:20:45.604 { 00:20:45.604 "method": "bdev_iscsi_set_options", 00:20:45.604 "params": { 00:20:45.604 "timeout_sec": 30 00:20:45.604 } 00:20:45.604 }, 00:20:45.604 { 00:20:45.604 "method": "bdev_nvme_set_options", 00:20:45.604 "params": { 00:20:45.604 "action_on_timeout": "none", 00:20:45.604 "timeout_us": 0, 00:20:45.604 "timeout_admin_us": 0, 00:20:45.604 "keep_alive_timeout_ms": 10000, 00:20:45.604 "arbitration_burst": 0, 00:20:45.604 "low_priority_weight": 0, 00:20:45.604 "medium_priority_weight": 0, 00:20:45.604 "high_priority_weight": 0, 00:20:45.604 "nvme_adminq_poll_period_us": 10000, 00:20:45.604 "nvme_ioq_poll_period_us": 0, 00:20:45.604 "io_queue_requests": 0, 00:20:45.604 "delay_cmd_submit": true, 00:20:45.604 "transport_retry_count": 4, 00:20:45.604 "bdev_retry_count": 3, 00:20:45.604 "transport_ack_timeout": 0, 00:20:45.604 "ctrlr_loss_timeout_sec": 0, 00:20:45.604 "reconnect_delay_sec": 0, 00:20:45.604 "fast_io_fail_timeout_sec": 0, 00:20:45.604 "disable_auto_failback": false, 00:20:45.604 "generate_uuids": false, 00:20:45.604 "transport_tos": 0, 00:20:45.604 "nvme_error_stat": false, 00:20:45.604 "rdma_srq_size": 0, 00:20:45.604 "io_path_stat": false, 00:20:45.604 "allow_accel_sequence": false, 00:20:45.604 "rdma_max_cq_size": 0, 00:20:45.604 "rdma_cm_event_timeout_ms": 0, 00:20:45.604 "dhchap_digests": [ 00:20:45.604 "sha256", 00:20:45.604 "sha384", 00:20:45.604 "sha512" 00:20:45.604 ], 00:20:45.604 "dhchap_dhgroups": [ 00:20:45.604 "null", 00:20:45.604 "ffdhe2048", 00:20:45.604 "ffdhe3072", 00:20:45.604 
"ffdhe4096", 00:20:45.605 "ffdhe6144", 00:20:45.605 "ffdhe8192" 00:20:45.605 ] 00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "bdev_nvme_set_hotplug", 00:20:45.605 "params": { 00:20:45.605 "period_us": 100000, 00:20:45.605 "enable": false 00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "bdev_malloc_create", 00:20:45.605 "params": { 00:20:45.605 "name": "malloc0", 00:20:45.605 "num_blocks": 8192, 00:20:45.605 "block_size": 4096, 00:20:45.605 "physical_block_size": 4096, 00:20:45.605 "uuid": "48c23d71-db26-4f74-aa10-e54fec911de5", 00:20:45.605 "optimal_io_boundary": 0, 00:20:45.605 "md_size": 0, 00:20:45.605 "dif_type": 0, 00:20:45.605 "dif_is_head_of_md": false, 00:20:45.605 "dif_pi_format": 0 00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "bdev_wait_for_examine" 00:20:45.605 } 00:20:45.605 ] 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "subsystem": "nbd", 00:20:45.605 "config": [] 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "subsystem": "scheduler", 00:20:45.605 "config": [ 00:20:45.605 { 00:20:45.605 "method": "framework_set_scheduler", 00:20:45.605 "params": { 00:20:45.605 "name": "static" 00:20:45.605 } 00:20:45.605 } 00:20:45.605 ] 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "subsystem": "nvmf", 00:20:45.605 "config": [ 00:20:45.605 { 00:20:45.605 "method": "nvmf_set_config", 00:20:45.605 "params": { 00:20:45.605 "discovery_filter": "match_any", 00:20:45.605 "admin_cmd_passthru": { 00:20:45.605 "identify_ctrlr": false 00:20:45.605 }, 00:20:45.605 "dhchap_digests": [ 00:20:45.605 "sha256", 00:20:45.605 "sha384", 00:20:45.605 "sha512" 00:20:45.605 ], 00:20:45.605 "dhchap_dhgroups": [ 00:20:45.605 "null", 00:20:45.605 "ffdhe2048", 00:20:45.605 "ffdhe3072", 00:20:45.605 "ffdhe4096", 00:20:45.605 "ffdhe6144", 00:20:45.605 "ffdhe8192" 00:20:45.605 ] 00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "nvmf_set_max_subsystems", 00:20:45.605 "params": { 00:20:45.605 "max_subsystems": 1024 
00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "nvmf_set_crdt", 00:20:45.605 "params": { 00:20:45.605 "crdt1": 0, 00:20:45.605 "crdt2": 0, 00:20:45.605 "crdt3": 0 00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "nvmf_create_transport", 00:20:45.605 "params": { 00:20:45.605 "trtype": "TCP", 00:20:45.605 "max_queue_depth": 128, 00:20:45.605 "max_io_qpairs_per_ctrlr": 127, 00:20:45.605 "in_capsule_data_size": 4096, 00:20:45.605 "max_io_size": 131072, 00:20:45.605 "io_unit_size": 131072, 00:20:45.605 "max_aq_depth": 128, 00:20:45.605 "num_shared_buffers": 511, 00:20:45.605 "buf_cache_size": 4294967295, 00:20:45.605 "dif_insert_or_strip": false, 00:20:45.605 "zcopy": false, 00:20:45.605 "c2h_success": false, 00:20:45.605 "sock_priority": 0, 00:20:45.605 "abort_timeout_sec": 1, 00:20:45.605 "ack_timeout": 0, 00:20:45.605 "data_wr_pool_size": 0 00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "nvmf_create_subsystem", 00:20:45.605 "params": { 00:20:45.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.605 "allow_any_host": false, 00:20:45.605 "serial_number": "SPDK00000000000001", 00:20:45.605 "model_number": "SPDK bdev Controller", 00:20:45.605 "max_namespaces": 10, 00:20:45.605 "min_cntlid": 1, 00:20:45.605 "max_cntlid": 65519, 00:20:45.605 "ana_reporting": false 00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "nvmf_subsystem_add_host", 00:20:45.605 "params": { 00:20:45.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.605 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.605 "psk": "key0" 00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "nvmf_subsystem_add_ns", 00:20:45.605 "params": { 00:20:45.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.605 "namespace": { 00:20:45.605 "nsid": 1, 00:20:45.605 "bdev_name": "malloc0", 00:20:45.605 "nguid": "48C23D71DB264F74AA10E54FEC911DE5", 00:20:45.605 "uuid": "48c23d71-db26-4f74-aa10-e54fec911de5", 00:20:45.605 "no_auto_visible": 
false 00:20:45.605 } 00:20:45.605 } 00:20:45.605 }, 00:20:45.605 { 00:20:45.605 "method": "nvmf_subsystem_add_listener", 00:20:45.605 "params": { 00:20:45.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.605 "listen_address": { 00:20:45.605 "trtype": "TCP", 00:20:45.605 "adrfam": "IPv4", 00:20:45.605 "traddr": "10.0.0.2", 00:20:45.605 "trsvcid": "4420" 00:20:45.605 }, 00:20:45.605 "secure_channel": true 00:20:45.605 } 00:20:45.605 } 00:20:45.605 ] 00:20:45.605 } 00:20:45.605 ] 00:20:45.605 }' 00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2015410 00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2015410 00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2015410 ']' 00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.605 13:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.605 [2024-11-29 13:04:45.226229] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:20:45.605 [2024-11-29 13:04:45.226275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.605 [2024-11-29 13:04:45.292551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.605 [2024-11-29 13:04:45.333507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.605 [2024-11-29 13:04:45.333543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.605 [2024-11-29 13:04:45.333550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.605 [2024-11-29 13:04:45.333556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.605 [2024-11-29 13:04:45.333561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:45.605 [2024-11-29 13:04:45.334213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.863 [2024-11-29 13:04:45.548642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.863 [2024-11-29 13:04:45.580674] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.863 [2024-11-29 13:04:45.580895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2015456 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2015456 /var/tmp/bdevperf.sock 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2015456 ']' 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.430 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:46.430 "subsystems": [ 00:20:46.430 { 00:20:46.430 "subsystem": "keyring", 00:20:46.430 "config": [ 00:20:46.430 { 00:20:46.430 "method": "keyring_file_add_key", 00:20:46.430 "params": { 00:20:46.430 "name": "key0", 00:20:46.430 "path": "/tmp/tmp.NUzaKyQQ9e" 00:20:46.430 } 00:20:46.430 } 00:20:46.430 ] 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "subsystem": "iobuf", 00:20:46.430 "config": [ 00:20:46.430 { 00:20:46.430 "method": "iobuf_set_options", 00:20:46.430 "params": { 00:20:46.430 "small_pool_count": 8192, 00:20:46.430 "large_pool_count": 1024, 00:20:46.430 "small_bufsize": 8192, 00:20:46.430 "large_bufsize": 135168, 00:20:46.430 "enable_numa": false 00:20:46.430 } 00:20:46.430 } 00:20:46.430 ] 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "subsystem": "sock", 00:20:46.430 "config": [ 00:20:46.430 { 00:20:46.430 "method": "sock_set_default_impl", 00:20:46.430 "params": { 00:20:46.430 "impl_name": "posix" 00:20:46.430 } 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "method": "sock_impl_set_options", 00:20:46.430 "params": { 00:20:46.430 "impl_name": "ssl", 00:20:46.430 "recv_buf_size": 4096, 00:20:46.430 "send_buf_size": 4096, 00:20:46.430 "enable_recv_pipe": true, 00:20:46.430 "enable_quickack": false, 00:20:46.430 "enable_placement_id": 0, 00:20:46.430 "enable_zerocopy_send_server": true, 00:20:46.430 "enable_zerocopy_send_client": false, 00:20:46.430 "zerocopy_threshold": 0, 00:20:46.430 "tls_version": 0, 00:20:46.430 "enable_ktls": false 
00:20:46.430 } 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "method": "sock_impl_set_options", 00:20:46.430 "params": { 00:20:46.430 "impl_name": "posix", 00:20:46.430 "recv_buf_size": 2097152, 00:20:46.430 "send_buf_size": 2097152, 00:20:46.430 "enable_recv_pipe": true, 00:20:46.430 "enable_quickack": false, 00:20:46.430 "enable_placement_id": 0, 00:20:46.430 "enable_zerocopy_send_server": true, 00:20:46.430 "enable_zerocopy_send_client": false, 00:20:46.430 "zerocopy_threshold": 0, 00:20:46.430 "tls_version": 0, 00:20:46.430 "enable_ktls": false 00:20:46.430 } 00:20:46.430 } 00:20:46.430 ] 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "subsystem": "vmd", 00:20:46.430 "config": [] 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "subsystem": "accel", 00:20:46.430 "config": [ 00:20:46.430 { 00:20:46.430 "method": "accel_set_options", 00:20:46.430 "params": { 00:20:46.430 "small_cache_size": 128, 00:20:46.430 "large_cache_size": 16, 00:20:46.430 "task_count": 2048, 00:20:46.430 "sequence_count": 2048, 00:20:46.430 "buf_count": 2048 00:20:46.430 } 00:20:46.430 } 00:20:46.430 ] 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "subsystem": "bdev", 00:20:46.430 "config": [ 00:20:46.430 { 00:20:46.430 "method": "bdev_set_options", 00:20:46.430 "params": { 00:20:46.430 "bdev_io_pool_size": 65535, 00:20:46.430 "bdev_io_cache_size": 256, 00:20:46.430 "bdev_auto_examine": true, 00:20:46.430 "iobuf_small_cache_size": 128, 00:20:46.430 "iobuf_large_cache_size": 16 00:20:46.430 } 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "method": "bdev_raid_set_options", 00:20:46.430 "params": { 00:20:46.430 "process_window_size_kb": 1024, 00:20:46.430 "process_max_bandwidth_mb_sec": 0 00:20:46.430 } 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "method": "bdev_iscsi_set_options", 00:20:46.430 "params": { 00:20:46.430 "timeout_sec": 30 00:20:46.430 } 00:20:46.430 }, 00:20:46.430 { 00:20:46.430 "method": "bdev_nvme_set_options", 00:20:46.430 "params": { 00:20:46.430 "action_on_timeout": "none", 00:20:46.430 
"timeout_us": 0, 00:20:46.430 "timeout_admin_us": 0, 00:20:46.430 "keep_alive_timeout_ms": 10000, 00:20:46.430 "arbitration_burst": 0, 00:20:46.430 "low_priority_weight": 0, 00:20:46.430 "medium_priority_weight": 0, 00:20:46.430 "high_priority_weight": 0, 00:20:46.430 "nvme_adminq_poll_period_us": 10000, 00:20:46.430 "nvme_ioq_poll_period_us": 0, 00:20:46.430 "io_queue_requests": 512, 00:20:46.430 "delay_cmd_submit": true, 00:20:46.430 "transport_retry_count": 4, 00:20:46.430 "bdev_retry_count": 3, 00:20:46.430 "transport_ack_timeout": 0, 00:20:46.430 "ctrlr_loss_timeout_sec": 0, 00:20:46.430 "reconnect_delay_sec": 0, 00:20:46.430 "fast_io_fail_timeout_sec": 0, 00:20:46.430 "disable_auto_failback": false, 00:20:46.430 "generate_uuids": false, 00:20:46.430 "transport_tos": 0, 00:20:46.431 "nvme_error_stat": false, 00:20:46.431 "rdma_srq_size": 0, 00:20:46.431 "io_path_stat": false, 00:20:46.431 "allow_accel_sequence": false, 00:20:46.431 "rdma_max_cq_size": 0, 00:20:46.431 "rdma_cm_event_timeout_ms": 0, 00:20:46.431 "dhchap_digests": [ 00:20:46.431 "sha256", 00:20:46.431 "sha384", 00:20:46.431 "sha512" 00:20:46.431 ], 00:20:46.431 "dhchap_dhgroups": [ 00:20:46.431 "null", 00:20:46.431 "ffdhe2048", 00:20:46.431 "ffdhe3072", 00:20:46.431 "ffdhe4096", 00:20:46.431 "ffdhe6144", 00:20:46.431 "ffdhe8192" 00:20:46.431 ] 00:20:46.431 } 00:20:46.431 }, 00:20:46.431 { 00:20:46.431 "method": "bdev_nvme_attach_controller", 00:20:46.431 "params": { 00:20:46.431 "name": "TLSTEST", 00:20:46.431 "trtype": "TCP", 00:20:46.431 "adrfam": "IPv4", 00:20:46.431 "traddr": "10.0.0.2", 00:20:46.431 "trsvcid": "4420", 00:20:46.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.431 "prchk_reftag": false, 00:20:46.431 "prchk_guard": false, 00:20:46.431 "ctrlr_loss_timeout_sec": 0, 00:20:46.431 "reconnect_delay_sec": 0, 00:20:46.431 "fast_io_fail_timeout_sec": 0, 00:20:46.431 "psk": "key0", 00:20:46.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.431 "hdgst": false, 00:20:46.431 "ddgst": 
false, 00:20:46.431 "multipath": "multipath" 00:20:46.431 } 00:20:46.431 }, 00:20:46.431 { 00:20:46.431 "method": "bdev_nvme_set_hotplug", 00:20:46.431 "params": { 00:20:46.431 "period_us": 100000, 00:20:46.431 "enable": false 00:20:46.431 } 00:20:46.431 }, 00:20:46.431 { 00:20:46.431 "method": "bdev_wait_for_examine" 00:20:46.431 } 00:20:46.431 ] 00:20:46.431 }, 00:20:46.431 { 00:20:46.431 "subsystem": "nbd", 00:20:46.431 "config": [] 00:20:46.431 } 00:20:46.431 ] 00:20:46.431 }' 00:20:46.431 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.431 [2024-11-29 13:04:46.143884] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:20:46.431 [2024-11-29 13:04:46.143933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015456 ] 00:20:46.431 [2024-11-29 13:04:46.202174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.431 [2024-11-29 13:04:46.244953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.690 [2024-11-29 13:04:46.397782] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.256 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.256 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.257 13:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:47.257 Running I/O for 10 seconds... 
00:20:49.570 5306.00 IOPS, 20.73 MiB/s [2024-11-29T12:04:50.378Z] 5417.50 IOPS, 21.16 MiB/s [2024-11-29T12:04:51.380Z] 5434.33 IOPS, 21.23 MiB/s [2024-11-29T12:04:52.316Z] 5405.75 IOPS, 21.12 MiB/s [2024-11-29T12:04:53.253Z] 5403.80 IOPS, 21.11 MiB/s [2024-11-29T12:04:54.189Z] 5383.00 IOPS, 21.03 MiB/s [2024-11-29T12:04:55.125Z] 5401.57 IOPS, 21.10 MiB/s [2024-11-29T12:04:56.501Z] 5407.12 IOPS, 21.12 MiB/s [2024-11-29T12:04:57.439Z] 5418.33 IOPS, 21.17 MiB/s [2024-11-29T12:04:57.439Z] 5424.70 IOPS, 21.19 MiB/s 00:20:57.619 Latency(us) 00:20:57.619 [2024-11-29T12:04:57.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.619 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.619 Verification LBA range: start 0x0 length 0x2000 00:20:57.619 TLSTESTn1 : 10.02 5428.90 21.21 0.00 0.00 23540.48 6012.22 26100.42 00:20:57.619 [2024-11-29T12:04:57.439Z] =================================================================================================================== 00:20:57.619 [2024-11-29T12:04:57.439Z] Total : 5428.90 21.21 0.00 0.00 23540.48 6012.22 26100.42 00:20:57.619 { 00:20:57.619 "results": [ 00:20:57.619 { 00:20:57.619 "job": "TLSTESTn1", 00:20:57.619 "core_mask": "0x4", 00:20:57.619 "workload": "verify", 00:20:57.619 "status": "finished", 00:20:57.619 "verify_range": { 00:20:57.619 "start": 0, 00:20:57.619 "length": 8192 00:20:57.619 }, 00:20:57.619 "queue_depth": 128, 00:20:57.619 "io_size": 4096, 00:20:57.619 "runtime": 10.015283, 00:20:57.619 "iops": 5428.903007533587, 00:20:57.619 "mibps": 21.206652373178073, 00:20:57.619 "io_failed": 0, 00:20:57.619 "io_timeout": 0, 00:20:57.619 "avg_latency_us": 23540.48297646807, 00:20:57.619 "min_latency_us": 6012.215652173913, 00:20:57.619 "max_latency_us": 26100.424347826087 00:20:57.619 } 00:20:57.619 ], 00:20:57.619 "core_count": 1 00:20:57.619 } 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2015456 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2015456 ']' 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2015456 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2015456 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2015456' 00:20:57.619 killing process with pid 2015456 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2015456 00:20:57.619 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.619 00:20:57.619 Latency(us) 00:20:57.619 [2024-11-29T12:04:57.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.619 [2024-11-29T12:04:57.439Z] =================================================================================================================== 00:20:57.619 [2024-11-29T12:04:57.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2015456 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2015410 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2015410 ']' 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2015410 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2015410 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2015410' 00:20:57.619 killing process with pid 2015410 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2015410 00:20:57.619 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2015410 00:20:57.877 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:57.877 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.877 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.877 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.877 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2017372 00:20:57.878 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:57.878 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2017372 00:20:57.878 
13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2017372 ']' 00:20:57.878 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.878 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.878 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.878 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.878 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.878 [2024-11-29 13:04:57.631680] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:20:57.878 [2024-11-29 13:04:57.631730] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.142 [2024-11-29 13:04:57.700059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.142 [2024-11-29 13:04:57.739209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.142 [2024-11-29 13:04:57.739249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.142 [2024-11-29 13:04:57.739256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.142 [2024-11-29 13:04:57.739261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:58.142 [2024-11-29 13:04:57.739266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.142 [2024-11-29 13:04:57.739868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.142 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.142 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:58.142 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.142 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.142 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.142 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.142 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.NUzaKyQQ9e 00:20:58.142 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NUzaKyQQ9e 00:20:58.142 13:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:58.401 [2024-11-29 13:04:58.044736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.401 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:58.659 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:58.659 [2024-11-29 13:04:58.421701] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:58.659 [2024-11-29 13:04:58.421915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.659 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:58.918 malloc0 00:20:58.918 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:59.176 13:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NUzaKyQQ9e 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2017765 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2017765 /var/tmp/bdevperf.sock 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2017765 ']' 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.435 
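The target-side TLS setup traced above (TCP transport, subsystem, `-k` TLS listener, malloc namespace, PSK key and host) plus the matching initiator-side attach can be condensed into a short script. This is a sketch, not verbatim from the log: the bare `rpc.py` name, the `$KEY` path, and the early-exit guard are assumptions, and the PSK file contents stay elided here just as `/tmp/tmp.NUzaKyQQ9e` is in the log.

```shell
#!/usr/bin/env bash
# Sketch of the TLS target/initiator setup exercised by target/tls.sh.
# Assumes rpc.py on PATH, a PSK file at $KEY, and bdevperf already
# listening on /var/tmp/bdevperf.sock; exits early when rpc.py is absent.
set -e
KEY=/tmp/psk.txt   # stands in for the log's /tmp/tmp.NUzaKyQQ9e
command -v rpc.py >/dev/null || { echo "rpc.py not found, skipping"; exit 0; }

# Target side (default RPC socket /var/tmp/spdk.sock)
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables the (experimental) TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 "$KEY"
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf's own RPC socket)
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
```

Both sides must register the same key material before the attach; the attach is what triggers the "TLS support is considered experimental" notice seen in the trace.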
13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.435 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.435 [2024-11-29 13:04:59.244765] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:20:59.435 [2024-11-29 13:04:59.244810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017765 ] 00:20:59.693 [2024-11-29 13:04:59.306806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.693 [2024-11-29 13:04:59.347778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.693 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.693 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:59.693 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NUzaKyQQ9e 00:20:59.951 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:00.209 [2024-11-29 13:04:59.804513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:21:00.209 nvme0n1 00:21:00.209 13:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.209 Running I/O for 1 seconds... 00:21:01.584 5215.00 IOPS, 20.37 MiB/s 00:21:01.584 Latency(us) 00:21:01.585 [2024-11-29T12:05:01.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.585 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.585 Verification LBA range: start 0x0 length 0x2000 00:21:01.585 nvme0n1 : 1.02 5260.28 20.55 0.00 0.00 24158.35 5157.40 48781.58 00:21:01.585 [2024-11-29T12:05:01.405Z] =================================================================================================================== 00:21:01.585 [2024-11-29T12:05:01.405Z] Total : 5260.28 20.55 0.00 0.00 24158.35 5157.40 48781.58 00:21:01.585 { 00:21:01.585 "results": [ 00:21:01.585 { 00:21:01.585 "job": "nvme0n1", 00:21:01.585 "core_mask": "0x2", 00:21:01.585 "workload": "verify", 00:21:01.585 "status": "finished", 00:21:01.585 "verify_range": { 00:21:01.585 "start": 0, 00:21:01.585 "length": 8192 00:21:01.585 }, 00:21:01.585 "queue_depth": 128, 00:21:01.585 "io_size": 4096, 00:21:01.585 "runtime": 1.015725, 00:21:01.585 "iops": 5260.282064535184, 00:21:01.585 "mibps": 20.547976814590562, 00:21:01.585 "io_failed": 0, 00:21:01.585 "io_timeout": 0, 00:21:01.585 "avg_latency_us": 24158.34792308506, 00:21:01.585 "min_latency_us": 5157.398260869565, 00:21:01.585 "max_latency_us": 48781.57913043478 00:21:01.585 } 00:21:01.585 ], 00:21:01.585 "core_count": 1 00:21:01.585 } 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2017765 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2017765 ']' 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2017765 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2017765 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2017765' 00:21:01.585 killing process with pid 2017765 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2017765 00:21:01.585 Received shutdown signal, test time was about 1.000000 seconds 00:21:01.585 00:21:01.585 Latency(us) 00:21:01.585 [2024-11-29T12:05:01.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.585 [2024-11-29T12:05:01.405Z] =================================================================================================================== 00:21:01.585 [2024-11-29T12:05:01.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2017765 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2017372 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2017372 ']' 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2017372 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2017372 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2017372' 00:21:01.585 killing process with pid 2017372 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2017372 00:21:01.585 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2017372 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2018031 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2018031 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2018031 ']' 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.844 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.844 [2024-11-29 13:05:01.514245] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:21:01.844 [2024-11-29 13:05:01.514297] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.844 [2024-11-29 13:05:01.580383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.844 [2024-11-29 13:05:01.616223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.844 [2024-11-29 13:05:01.616256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.844 [2024-11-29 13:05:01.616262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.844 [2024-11-29 13:05:01.616268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.844 [2024-11-29 13:05:01.616273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:01.844 [2024-11-29 13:05:01.616833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 [2024-11-29 13:05:01.758122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.104 malloc0 00:21:02.104 [2024-11-29 13:05:01.786395] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.104 [2024-11-29 13:05:01.786600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2018141 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2018141 /var/tmp/bdevperf.sock 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2018141 ']' 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.104 13:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 [2024-11-29 13:05:01.862934] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:21:02.104 [2024-11-29 13:05:01.862983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018141 ] 00:21:02.363 [2024-11-29 13:05:01.926291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.363 [2024-11-29 13:05:01.970774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.363 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.363 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:02.363 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NUzaKyQQ9e 00:21:02.622 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:02.622 [2024-11-29 13:05:02.437243] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.881 nvme0n1 00:21:02.881 13:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:02.881 Running I/O for 1 seconds... 
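The MiB/s column in each bdevperf result is just IOPS scaled by the 4096-byte IO size, so the figures in the JSON result objects are self-consistent. A quick check of that arithmetic with awk, using the numbers from the first run above (5260.28 IOPS):

```shell
# MiB/s = IOPS * io_size / 2^20; with a 4096-byte IO size that is IOPS / 256.
iops=5260.282064535184
io_size=4096
awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'
# prints: 20.55 MiB/s
```

This matches the 20.55 MiB/s reported in the first run's Device Information table.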
00:21:04.078 5167.00 IOPS, 20.18 MiB/s 00:21:04.078 Latency(us) 00:21:04.078 [2024-11-29T12:05:03.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.078 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:04.078 Verification LBA range: start 0x0 length 0x2000 00:21:04.078 nvme0n1 : 1.02 5190.92 20.28 0.00 0.00 24420.64 4701.50 37384.01 00:21:04.078 [2024-11-29T12:05:03.898Z] =================================================================================================================== 00:21:04.078 [2024-11-29T12:05:03.898Z] Total : 5190.92 20.28 0.00 0.00 24420.64 4701.50 37384.01 00:21:04.078 { 00:21:04.078 "results": [ 00:21:04.078 { 00:21:04.078 "job": "nvme0n1", 00:21:04.078 "core_mask": "0x2", 00:21:04.078 "workload": "verify", 00:21:04.078 "status": "finished", 00:21:04.078 "verify_range": { 00:21:04.078 "start": 0, 00:21:04.078 "length": 8192 00:21:04.078 }, 00:21:04.078 "queue_depth": 128, 00:21:04.078 "io_size": 4096, 00:21:04.078 "runtime": 1.020244, 00:21:04.078 "iops": 5190.915114423608, 00:21:04.078 "mibps": 20.27701216571722, 00:21:04.078 "io_failed": 0, 00:21:04.078 "io_timeout": 0, 00:21:04.078 "avg_latency_us": 24420.639306449495, 00:21:04.078 "min_latency_us": 4701.495652173913, 00:21:04.078 "max_latency_us": 37384.013913043476 00:21:04.078 } 00:21:04.078 ], 00:21:04.078 "core_count": 1 00:21:04.078 } 00:21:04.078 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:04.078 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.078 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.078 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.078 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:04.078 "subsystems": [ 00:21:04.078 { 00:21:04.078 "subsystem": 
"keyring", 00:21:04.078 "config": [ 00:21:04.078 { 00:21:04.078 "method": "keyring_file_add_key", 00:21:04.078 "params": { 00:21:04.078 "name": "key0", 00:21:04.078 "path": "/tmp/tmp.NUzaKyQQ9e" 00:21:04.078 } 00:21:04.078 } 00:21:04.078 ] 00:21:04.078 }, 00:21:04.078 { 00:21:04.078 "subsystem": "iobuf", 00:21:04.078 "config": [ 00:21:04.078 { 00:21:04.078 "method": "iobuf_set_options", 00:21:04.078 "params": { 00:21:04.078 "small_pool_count": 8192, 00:21:04.078 "large_pool_count": 1024, 00:21:04.078 "small_bufsize": 8192, 00:21:04.078 "large_bufsize": 135168, 00:21:04.078 "enable_numa": false 00:21:04.078 } 00:21:04.078 } 00:21:04.078 ] 00:21:04.078 }, 00:21:04.078 { 00:21:04.078 "subsystem": "sock", 00:21:04.078 "config": [ 00:21:04.078 { 00:21:04.078 "method": "sock_set_default_impl", 00:21:04.078 "params": { 00:21:04.078 "impl_name": "posix" 00:21:04.078 } 00:21:04.078 }, 00:21:04.078 { 00:21:04.078 "method": "sock_impl_set_options", 00:21:04.078 "params": { 00:21:04.078 "impl_name": "ssl", 00:21:04.078 "recv_buf_size": 4096, 00:21:04.078 "send_buf_size": 4096, 00:21:04.078 "enable_recv_pipe": true, 00:21:04.078 "enable_quickack": false, 00:21:04.078 "enable_placement_id": 0, 00:21:04.078 "enable_zerocopy_send_server": true, 00:21:04.078 "enable_zerocopy_send_client": false, 00:21:04.078 "zerocopy_threshold": 0, 00:21:04.078 "tls_version": 0, 00:21:04.078 "enable_ktls": false 00:21:04.078 } 00:21:04.078 }, 00:21:04.078 { 00:21:04.078 "method": "sock_impl_set_options", 00:21:04.078 "params": { 00:21:04.078 "impl_name": "posix", 00:21:04.078 "recv_buf_size": 2097152, 00:21:04.078 "send_buf_size": 2097152, 00:21:04.078 "enable_recv_pipe": true, 00:21:04.078 "enable_quickack": false, 00:21:04.078 "enable_placement_id": 0, 00:21:04.078 "enable_zerocopy_send_server": true, 00:21:04.078 "enable_zerocopy_send_client": false, 00:21:04.078 "zerocopy_threshold": 0, 00:21:04.078 "tls_version": 0, 00:21:04.078 "enable_ktls": false 00:21:04.078 } 00:21:04.078 } 00:21:04.078 
] 00:21:04.078 }, 00:21:04.078 { 00:21:04.078 "subsystem": "vmd", 00:21:04.078 "config": [] 00:21:04.078 }, 00:21:04.078 { 00:21:04.078 "subsystem": "accel", 00:21:04.078 "config": [ 00:21:04.078 { 00:21:04.078 "method": "accel_set_options", 00:21:04.078 "params": { 00:21:04.078 "small_cache_size": 128, 00:21:04.078 "large_cache_size": 16, 00:21:04.078 "task_count": 2048, 00:21:04.078 "sequence_count": 2048, 00:21:04.078 "buf_count": 2048 00:21:04.078 } 00:21:04.079 } 00:21:04.079 ] 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "subsystem": "bdev", 00:21:04.079 "config": [ 00:21:04.079 { 00:21:04.079 "method": "bdev_set_options", 00:21:04.079 "params": { 00:21:04.079 "bdev_io_pool_size": 65535, 00:21:04.079 "bdev_io_cache_size": 256, 00:21:04.079 "bdev_auto_examine": true, 00:21:04.079 "iobuf_small_cache_size": 128, 00:21:04.079 "iobuf_large_cache_size": 16 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "bdev_raid_set_options", 00:21:04.079 "params": { 00:21:04.079 "process_window_size_kb": 1024, 00:21:04.079 "process_max_bandwidth_mb_sec": 0 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "bdev_iscsi_set_options", 00:21:04.079 "params": { 00:21:04.079 "timeout_sec": 30 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "bdev_nvme_set_options", 00:21:04.079 "params": { 00:21:04.079 "action_on_timeout": "none", 00:21:04.079 "timeout_us": 0, 00:21:04.079 "timeout_admin_us": 0, 00:21:04.079 "keep_alive_timeout_ms": 10000, 00:21:04.079 "arbitration_burst": 0, 00:21:04.079 "low_priority_weight": 0, 00:21:04.079 "medium_priority_weight": 0, 00:21:04.079 "high_priority_weight": 0, 00:21:04.079 "nvme_adminq_poll_period_us": 10000, 00:21:04.079 "nvme_ioq_poll_period_us": 0, 00:21:04.079 "io_queue_requests": 0, 00:21:04.079 "delay_cmd_submit": true, 00:21:04.079 "transport_retry_count": 4, 00:21:04.079 "bdev_retry_count": 3, 00:21:04.079 "transport_ack_timeout": 0, 00:21:04.079 "ctrlr_loss_timeout_sec": 0, 
00:21:04.079 "reconnect_delay_sec": 0, 00:21:04.079 "fast_io_fail_timeout_sec": 0, 00:21:04.079 "disable_auto_failback": false, 00:21:04.079 "generate_uuids": false, 00:21:04.079 "transport_tos": 0, 00:21:04.079 "nvme_error_stat": false, 00:21:04.079 "rdma_srq_size": 0, 00:21:04.079 "io_path_stat": false, 00:21:04.079 "allow_accel_sequence": false, 00:21:04.079 "rdma_max_cq_size": 0, 00:21:04.079 "rdma_cm_event_timeout_ms": 0, 00:21:04.079 "dhchap_digests": [ 00:21:04.079 "sha256", 00:21:04.079 "sha384", 00:21:04.079 "sha512" 00:21:04.079 ], 00:21:04.079 "dhchap_dhgroups": [ 00:21:04.079 "null", 00:21:04.079 "ffdhe2048", 00:21:04.079 "ffdhe3072", 00:21:04.079 "ffdhe4096", 00:21:04.079 "ffdhe6144", 00:21:04.079 "ffdhe8192" 00:21:04.079 ] 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "bdev_nvme_set_hotplug", 00:21:04.079 "params": { 00:21:04.079 "period_us": 100000, 00:21:04.079 "enable": false 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "bdev_malloc_create", 00:21:04.079 "params": { 00:21:04.079 "name": "malloc0", 00:21:04.079 "num_blocks": 8192, 00:21:04.079 "block_size": 4096, 00:21:04.079 "physical_block_size": 4096, 00:21:04.079 "uuid": "1ec23d18-7240-4a72-8f1f-8b58e019bfaf", 00:21:04.079 "optimal_io_boundary": 0, 00:21:04.079 "md_size": 0, 00:21:04.079 "dif_type": 0, 00:21:04.079 "dif_is_head_of_md": false, 00:21:04.079 "dif_pi_format": 0 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "bdev_wait_for_examine" 00:21:04.079 } 00:21:04.079 ] 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "subsystem": "nbd", 00:21:04.079 "config": [] 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "subsystem": "scheduler", 00:21:04.079 "config": [ 00:21:04.079 { 00:21:04.079 "method": "framework_set_scheduler", 00:21:04.079 "params": { 00:21:04.079 "name": "static" 00:21:04.079 } 00:21:04.079 } 00:21:04.079 ] 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "subsystem": "nvmf", 00:21:04.079 "config": [ 00:21:04.079 { 
00:21:04.079 "method": "nvmf_set_config", 00:21:04.079 "params": { 00:21:04.079 "discovery_filter": "match_any", 00:21:04.079 "admin_cmd_passthru": { 00:21:04.079 "identify_ctrlr": false 00:21:04.079 }, 00:21:04.079 "dhchap_digests": [ 00:21:04.079 "sha256", 00:21:04.079 "sha384", 00:21:04.079 "sha512" 00:21:04.079 ], 00:21:04.079 "dhchap_dhgroups": [ 00:21:04.079 "null", 00:21:04.079 "ffdhe2048", 00:21:04.079 "ffdhe3072", 00:21:04.079 "ffdhe4096", 00:21:04.079 "ffdhe6144", 00:21:04.079 "ffdhe8192" 00:21:04.079 ] 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "nvmf_set_max_subsystems", 00:21:04.079 "params": { 00:21:04.079 "max_subsystems": 1024 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "nvmf_set_crdt", 00:21:04.079 "params": { 00:21:04.079 "crdt1": 0, 00:21:04.079 "crdt2": 0, 00:21:04.079 "crdt3": 0 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "nvmf_create_transport", 00:21:04.079 "params": { 00:21:04.079 "trtype": "TCP", 00:21:04.079 "max_queue_depth": 128, 00:21:04.079 "max_io_qpairs_per_ctrlr": 127, 00:21:04.079 "in_capsule_data_size": 4096, 00:21:04.079 "max_io_size": 131072, 00:21:04.079 "io_unit_size": 131072, 00:21:04.079 "max_aq_depth": 128, 00:21:04.079 "num_shared_buffers": 511, 00:21:04.079 "buf_cache_size": 4294967295, 00:21:04.079 "dif_insert_or_strip": false, 00:21:04.079 "zcopy": false, 00:21:04.079 "c2h_success": false, 00:21:04.079 "sock_priority": 0, 00:21:04.079 "abort_timeout_sec": 1, 00:21:04.079 "ack_timeout": 0, 00:21:04.079 "data_wr_pool_size": 0 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "nvmf_create_subsystem", 00:21:04.079 "params": { 00:21:04.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.079 "allow_any_host": false, 00:21:04.079 "serial_number": "00000000000000000000", 00:21:04.079 "model_number": "SPDK bdev Controller", 00:21:04.079 "max_namespaces": 32, 00:21:04.079 "min_cntlid": 1, 00:21:04.079 "max_cntlid": 65519, 00:21:04.079 
"ana_reporting": false 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "nvmf_subsystem_add_host", 00:21:04.079 "params": { 00:21:04.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.079 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.079 "psk": "key0" 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "nvmf_subsystem_add_ns", 00:21:04.079 "params": { 00:21:04.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.079 "namespace": { 00:21:04.079 "nsid": 1, 00:21:04.079 "bdev_name": "malloc0", 00:21:04.079 "nguid": "1EC23D1872404A728F1F8B58E019BFAF", 00:21:04.079 "uuid": "1ec23d18-7240-4a72-8f1f-8b58e019bfaf", 00:21:04.079 "no_auto_visible": false 00:21:04.079 } 00:21:04.079 } 00:21:04.079 }, 00:21:04.079 { 00:21:04.079 "method": "nvmf_subsystem_add_listener", 00:21:04.079 "params": { 00:21:04.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.079 "listen_address": { 00:21:04.079 "trtype": "TCP", 00:21:04.079 "adrfam": "IPv4", 00:21:04.079 "traddr": "10.0.0.2", 00:21:04.079 "trsvcid": "4420" 00:21:04.079 }, 00:21:04.079 "secure_channel": false, 00:21:04.079 "sock_impl": "ssl" 00:21:04.079 } 00:21:04.079 } 00:21:04.079 ] 00:21:04.079 } 00:21:04.079 ] 00:21:04.079 }' 00:21:04.079 13:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:04.339 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:04.339 "subsystems": [ 00:21:04.339 { 00:21:04.339 "subsystem": "keyring", 00:21:04.339 "config": [ 00:21:04.339 { 00:21:04.339 "method": "keyring_file_add_key", 00:21:04.339 "params": { 00:21:04.339 "name": "key0", 00:21:04.339 "path": "/tmp/tmp.NUzaKyQQ9e" 00:21:04.339 } 00:21:04.339 } 00:21:04.339 ] 00:21:04.339 }, 00:21:04.339 { 00:21:04.339 "subsystem": "iobuf", 00:21:04.339 "config": [ 00:21:04.339 { 00:21:04.339 "method": "iobuf_set_options", 00:21:04.339 "params": { 00:21:04.339 
"small_pool_count": 8192, 00:21:04.339 "large_pool_count": 1024, 00:21:04.339 "small_bufsize": 8192, 00:21:04.339 "large_bufsize": 135168, 00:21:04.339 "enable_numa": false 00:21:04.339 } 00:21:04.339 } 00:21:04.339 ] 00:21:04.339 }, 00:21:04.339 { 00:21:04.339 "subsystem": "sock", 00:21:04.339 "config": [ 00:21:04.339 { 00:21:04.339 "method": "sock_set_default_impl", 00:21:04.339 "params": { 00:21:04.339 "impl_name": "posix" 00:21:04.339 } 00:21:04.339 }, 00:21:04.339 { 00:21:04.339 "method": "sock_impl_set_options", 00:21:04.339 "params": { 00:21:04.339 "impl_name": "ssl", 00:21:04.339 "recv_buf_size": 4096, 00:21:04.339 "send_buf_size": 4096, 00:21:04.339 "enable_recv_pipe": true, 00:21:04.339 "enable_quickack": false, 00:21:04.339 "enable_placement_id": 0, 00:21:04.339 "enable_zerocopy_send_server": true, 00:21:04.339 "enable_zerocopy_send_client": false, 00:21:04.339 "zerocopy_threshold": 0, 00:21:04.339 "tls_version": 0, 00:21:04.339 "enable_ktls": false 00:21:04.340 } 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "method": "sock_impl_set_options", 00:21:04.340 "params": { 00:21:04.340 "impl_name": "posix", 00:21:04.340 "recv_buf_size": 2097152, 00:21:04.340 "send_buf_size": 2097152, 00:21:04.340 "enable_recv_pipe": true, 00:21:04.340 "enable_quickack": false, 00:21:04.340 "enable_placement_id": 0, 00:21:04.340 "enable_zerocopy_send_server": true, 00:21:04.340 "enable_zerocopy_send_client": false, 00:21:04.340 "zerocopy_threshold": 0, 00:21:04.340 "tls_version": 0, 00:21:04.340 "enable_ktls": false 00:21:04.340 } 00:21:04.340 } 00:21:04.340 ] 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "subsystem": "vmd", 00:21:04.340 "config": [] 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "subsystem": "accel", 00:21:04.340 "config": [ 00:21:04.340 { 00:21:04.340 "method": "accel_set_options", 00:21:04.340 "params": { 00:21:04.340 "small_cache_size": 128, 00:21:04.340 "large_cache_size": 16, 00:21:04.340 "task_count": 2048, 00:21:04.340 "sequence_count": 2048, 00:21:04.340 
"buf_count": 2048 00:21:04.340 } 00:21:04.340 } 00:21:04.340 ] 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "subsystem": "bdev", 00:21:04.340 "config": [ 00:21:04.340 { 00:21:04.340 "method": "bdev_set_options", 00:21:04.340 "params": { 00:21:04.340 "bdev_io_pool_size": 65535, 00:21:04.340 "bdev_io_cache_size": 256, 00:21:04.340 "bdev_auto_examine": true, 00:21:04.340 "iobuf_small_cache_size": 128, 00:21:04.340 "iobuf_large_cache_size": 16 00:21:04.340 } 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "method": "bdev_raid_set_options", 00:21:04.340 "params": { 00:21:04.340 "process_window_size_kb": 1024, 00:21:04.340 "process_max_bandwidth_mb_sec": 0 00:21:04.340 } 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "method": "bdev_iscsi_set_options", 00:21:04.340 "params": { 00:21:04.340 "timeout_sec": 30 00:21:04.340 } 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "method": "bdev_nvme_set_options", 00:21:04.340 "params": { 00:21:04.340 "action_on_timeout": "none", 00:21:04.340 "timeout_us": 0, 00:21:04.340 "timeout_admin_us": 0, 00:21:04.340 "keep_alive_timeout_ms": 10000, 00:21:04.340 "arbitration_burst": 0, 00:21:04.340 "low_priority_weight": 0, 00:21:04.340 "medium_priority_weight": 0, 00:21:04.340 "high_priority_weight": 0, 00:21:04.340 "nvme_adminq_poll_period_us": 10000, 00:21:04.340 "nvme_ioq_poll_period_us": 0, 00:21:04.340 "io_queue_requests": 512, 00:21:04.340 "delay_cmd_submit": true, 00:21:04.340 "transport_retry_count": 4, 00:21:04.340 "bdev_retry_count": 3, 00:21:04.340 "transport_ack_timeout": 0, 00:21:04.340 "ctrlr_loss_timeout_sec": 0, 00:21:04.340 "reconnect_delay_sec": 0, 00:21:04.340 "fast_io_fail_timeout_sec": 0, 00:21:04.340 "disable_auto_failback": false, 00:21:04.340 "generate_uuids": false, 00:21:04.340 "transport_tos": 0, 00:21:04.340 "nvme_error_stat": false, 00:21:04.340 "rdma_srq_size": 0, 00:21:04.340 "io_path_stat": false, 00:21:04.340 "allow_accel_sequence": false, 00:21:04.340 "rdma_max_cq_size": 0, 00:21:04.340 "rdma_cm_event_timeout_ms": 0, 
00:21:04.340 "dhchap_digests": [ 00:21:04.340 "sha256", 00:21:04.340 "sha384", 00:21:04.340 "sha512" 00:21:04.340 ], 00:21:04.340 "dhchap_dhgroups": [ 00:21:04.340 "null", 00:21:04.340 "ffdhe2048", 00:21:04.340 "ffdhe3072", 00:21:04.340 "ffdhe4096", 00:21:04.340 "ffdhe6144", 00:21:04.340 "ffdhe8192" 00:21:04.340 ] 00:21:04.340 } 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "method": "bdev_nvme_attach_controller", 00:21:04.340 "params": { 00:21:04.340 "name": "nvme0", 00:21:04.340 "trtype": "TCP", 00:21:04.340 "adrfam": "IPv4", 00:21:04.340 "traddr": "10.0.0.2", 00:21:04.340 "trsvcid": "4420", 00:21:04.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.340 "prchk_reftag": false, 00:21:04.340 "prchk_guard": false, 00:21:04.340 "ctrlr_loss_timeout_sec": 0, 00:21:04.340 "reconnect_delay_sec": 0, 00:21:04.340 "fast_io_fail_timeout_sec": 0, 00:21:04.340 "psk": "key0", 00:21:04.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.340 "hdgst": false, 00:21:04.340 "ddgst": false, 00:21:04.340 "multipath": "multipath" 00:21:04.340 } 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "method": "bdev_nvme_set_hotplug", 00:21:04.340 "params": { 00:21:04.340 "period_us": 100000, 00:21:04.340 "enable": false 00:21:04.340 } 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "method": "bdev_enable_histogram", 00:21:04.340 "params": { 00:21:04.340 "name": "nvme0n1", 00:21:04.340 "enable": true 00:21:04.340 } 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "method": "bdev_wait_for_examine" 00:21:04.340 } 00:21:04.340 ] 00:21:04.340 }, 00:21:04.340 { 00:21:04.340 "subsystem": "nbd", 00:21:04.340 "config": [] 00:21:04.340 } 00:21:04.340 ] 00:21:04.340 }' 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2018141 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2018141 ']' 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2018141 00:21:04.340 13:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018141 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018141' 00:21:04.340 killing process with pid 2018141 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2018141 00:21:04.340 Received shutdown signal, test time was about 1.000000 seconds 00:21:04.340 00:21:04.340 Latency(us) 00:21:04.340 [2024-11-29T12:05:04.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.340 [2024-11-29T12:05:04.160Z] =================================================================================================================== 00:21:04.340 [2024-11-29T12:05:04.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.340 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2018141 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2018031 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2018031 ']' 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2018031 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.600 
13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018031 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018031' 00:21:04.600 killing process with pid 2018031 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2018031 00:21:04.600 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2018031 00:21:04.861 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:04.861 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.861 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.861 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:04.861 "subsystems": [ 00:21:04.861 { 00:21:04.861 "subsystem": "keyring", 00:21:04.861 "config": [ 00:21:04.861 { 00:21:04.861 "method": "keyring_file_add_key", 00:21:04.861 "params": { 00:21:04.861 "name": "key0", 00:21:04.861 "path": "/tmp/tmp.NUzaKyQQ9e" 00:21:04.861 } 00:21:04.861 } 00:21:04.861 ] 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "subsystem": "iobuf", 00:21:04.861 "config": [ 00:21:04.861 { 00:21:04.861 "method": "iobuf_set_options", 00:21:04.861 "params": { 00:21:04.861 "small_pool_count": 8192, 00:21:04.861 "large_pool_count": 1024, 00:21:04.861 "small_bufsize": 8192, 00:21:04.861 "large_bufsize": 135168, 00:21:04.861 "enable_numa": false 00:21:04.861 } 00:21:04.861 } 00:21:04.861 ] 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "subsystem": "sock", 00:21:04.861 "config": [ 
00:21:04.861 { 00:21:04.861 "method": "sock_set_default_impl", 00:21:04.861 "params": { 00:21:04.861 "impl_name": "posix" 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "sock_impl_set_options", 00:21:04.861 "params": { 00:21:04.861 "impl_name": "ssl", 00:21:04.861 "recv_buf_size": 4096, 00:21:04.861 "send_buf_size": 4096, 00:21:04.861 "enable_recv_pipe": true, 00:21:04.861 "enable_quickack": false, 00:21:04.861 "enable_placement_id": 0, 00:21:04.861 "enable_zerocopy_send_server": true, 00:21:04.861 "enable_zerocopy_send_client": false, 00:21:04.861 "zerocopy_threshold": 0, 00:21:04.861 "tls_version": 0, 00:21:04.861 "enable_ktls": false 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "sock_impl_set_options", 00:21:04.861 "params": { 00:21:04.861 "impl_name": "posix", 00:21:04.861 "recv_buf_size": 2097152, 00:21:04.861 "send_buf_size": 2097152, 00:21:04.861 "enable_recv_pipe": true, 00:21:04.861 "enable_quickack": false, 00:21:04.861 "enable_placement_id": 0, 00:21:04.861 "enable_zerocopy_send_server": true, 00:21:04.861 "enable_zerocopy_send_client": false, 00:21:04.861 "zerocopy_threshold": 0, 00:21:04.861 "tls_version": 0, 00:21:04.861 "enable_ktls": false 00:21:04.861 } 00:21:04.861 } 00:21:04.861 ] 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "subsystem": "vmd", 00:21:04.861 "config": [] 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "subsystem": "accel", 00:21:04.861 "config": [ 00:21:04.861 { 00:21:04.861 "method": "accel_set_options", 00:21:04.861 "params": { 00:21:04.861 "small_cache_size": 128, 00:21:04.861 "large_cache_size": 16, 00:21:04.861 "task_count": 2048, 00:21:04.861 "sequence_count": 2048, 00:21:04.861 "buf_count": 2048 00:21:04.861 } 00:21:04.861 } 00:21:04.861 ] 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "subsystem": "bdev", 00:21:04.861 "config": [ 00:21:04.861 { 00:21:04.861 "method": "bdev_set_options", 00:21:04.861 "params": { 00:21:04.861 "bdev_io_pool_size": 65535, 00:21:04.861 "bdev_io_cache_size": 
256, 00:21:04.861 "bdev_auto_examine": true, 00:21:04.861 "iobuf_small_cache_size": 128, 00:21:04.861 "iobuf_large_cache_size": 16 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "bdev_raid_set_options", 00:21:04.861 "params": { 00:21:04.861 "process_window_size_kb": 1024, 00:21:04.861 "process_max_bandwidth_mb_sec": 0 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "bdev_iscsi_set_options", 00:21:04.861 "params": { 00:21:04.861 "timeout_sec": 30 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "bdev_nvme_set_options", 00:21:04.861 "params": { 00:21:04.861 "action_on_timeout": "none", 00:21:04.861 "timeout_us": 0, 00:21:04.861 "timeout_admin_us": 0, 00:21:04.861 "keep_alive_timeout_ms": 10000, 00:21:04.861 "arbitration_burst": 0, 00:21:04.861 "low_priority_weight": 0, 00:21:04.861 "medium_priority_weight": 0, 00:21:04.861 "high_priority_weight": 0, 00:21:04.861 "nvme_adminq_poll_period_us": 10000, 00:21:04.861 "nvme_ioq_poll_period_us": 0, 00:21:04.861 "io_queue_requests": 0, 00:21:04.861 "delay_cmd_submit": true, 00:21:04.861 "transport_retry_count": 4, 00:21:04.861 "bdev_retry_count": 3, 00:21:04.861 "transport_ack_timeout": 0, 00:21:04.861 "ctrlr_loss_timeout_sec": 0, 00:21:04.861 "reconnect_delay_sec": 0, 00:21:04.861 "fast_io_fail_timeout_sec": 0, 00:21:04.861 "disable_auto_failback": false, 00:21:04.861 "generate_uuids": false, 00:21:04.861 "transport_tos": 0, 00:21:04.861 "nvme_error_stat": false, 00:21:04.861 "rdma_srq_size": 0, 00:21:04.861 "io_path_stat": false, 00:21:04.861 "allow_accel_sequence": false, 00:21:04.861 "rdma_max_cq_size": 0, 00:21:04.861 "rdma_cm_event_timeout_ms": 0, 00:21:04.861 "dhchap_digests": [ 00:21:04.861 "sha256", 00:21:04.861 "sha384", 00:21:04.861 "sha512" 00:21:04.861 ], 00:21:04.861 "dhchap_dhgroups": [ 00:21:04.861 "null", 00:21:04.861 "ffdhe2048", 00:21:04.861 "ffdhe3072", 00:21:04.861 "ffdhe4096", 00:21:04.861 "ffdhe6144", 00:21:04.861 "ffdhe8192" 00:21:04.861 ] 
00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "bdev_nvme_set_hotplug", 00:21:04.861 "params": { 00:21:04.861 "period_us": 100000, 00:21:04.861 "enable": false 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "bdev_malloc_create", 00:21:04.861 "params": { 00:21:04.861 "name": "malloc0", 00:21:04.861 "num_blocks": 8192, 00:21:04.861 "block_size": 4096, 00:21:04.861 "physical_block_size": 4096, 00:21:04.861 "uuid": "1ec23d18-7240-4a72-8f1f-8b58e019bfaf", 00:21:04.861 "optimal_io_boundary": 0, 00:21:04.861 "md_size": 0, 00:21:04.861 "dif_type": 0, 00:21:04.861 "dif_is_head_of_md": false, 00:21:04.861 "dif_pi_format": 0 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "bdev_wait_for_examine" 00:21:04.861 } 00:21:04.861 ] 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "subsystem": "nbd", 00:21:04.861 "config": [] 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "subsystem": "scheduler", 00:21:04.861 "config": [ 00:21:04.861 { 00:21:04.861 "method": "framework_set_scheduler", 00:21:04.861 "params": { 00:21:04.861 "name": "static" 00:21:04.861 } 00:21:04.861 } 00:21:04.861 ] 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "subsystem": "nvmf", 00:21:04.861 "config": [ 00:21:04.861 { 00:21:04.861 "method": "nvmf_set_config", 00:21:04.861 "params": { 00:21:04.861 "discovery_filter": "match_any", 00:21:04.861 "admin_cmd_passthru": { 00:21:04.861 "identify_ctrlr": false 00:21:04.861 }, 00:21:04.861 "dhchap_digests": [ 00:21:04.861 "sha256", 00:21:04.861 "sha384", 00:21:04.861 "sha512" 00:21:04.861 ], 00:21:04.861 "dhchap_dhgroups": [ 00:21:04.861 "null", 00:21:04.861 "ffdhe2048", 00:21:04.861 "ffdhe3072", 00:21:04.861 "ffdhe4096", 00:21:04.861 "ffdhe6144", 00:21:04.861 "ffdhe8192" 00:21:04.861 ] 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "nvmf_set_max_subsystems", 00:21:04.861 "params": { 00:21:04.861 "max_subsystems": 1024 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": 
"nvmf_set_crdt", 00:21:04.861 "params": { 00:21:04.861 "crdt1": 0, 00:21:04.861 "crdt2": 0, 00:21:04.861 "crdt3": 0 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "nvmf_create_transport", 00:21:04.861 "params": { 00:21:04.861 "trtype": "TCP", 00:21:04.861 "max_queue_depth": 128, 00:21:04.861 "max_io_qpairs_per_ctrlr": 127, 00:21:04.861 "in_capsule_data_size": 4096, 00:21:04.861 "max_io_size": 131072, 00:21:04.861 "io_unit_size": 131072, 00:21:04.861 "max_aq_depth": 128, 00:21:04.861 "num_shared_buffers": 511, 00:21:04.861 "buf_cache_size": 4294967295, 00:21:04.861 "dif_insert_or_strip": false, 00:21:04.861 "zcopy": false, 00:21:04.861 "c2h_success": false, 00:21:04.861 "sock_priority": 0, 00:21:04.861 "abort_timeout_sec": 1, 00:21:04.861 "ack_timeout": 0, 00:21:04.861 "data_wr_pool_size": 0 00:21:04.861 } 00:21:04.861 }, 00:21:04.861 { 00:21:04.861 "method": "nvmf_create_subsystem", 00:21:04.861 "params": { 00:21:04.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.861 "allow_any_host": false, 00:21:04.861 "serial_number": "00000000000000000000", 00:21:04.861 "model_number": "SPDK bdev Controller", 00:21:04.861 "max_namespaces": 32, 00:21:04.861 "min_cntlid": 1, 00:21:04.861 "max_cntlid": 65519, 00:21:04.862 "ana_reporting": false 00:21:04.862 } 00:21:04.862 }, 00:21:04.862 { 00:21:04.862 "method": "nvmf_subsystem_add_host", 00:21:04.862 "params": { 00:21:04.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.862 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.862 "psk": "key0" 00:21:04.862 } 00:21:04.862 }, 00:21:04.862 { 00:21:04.862 "method": "nvmf_subsystem_add_ns", 00:21:04.862 "params": { 00:21:04.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.862 "namespace": { 00:21:04.862 "nsid": 1, 00:21:04.862 "bdev_name": "malloc0", 00:21:04.862 "nguid": "1EC23D1872404A728F1F8B58E019BFAF", 00:21:04.862 "uuid": "1ec23d18-7240-4a72-8f1f-8b58e019bfaf", 00:21:04.862 "no_auto_visible": false 00:21:04.862 } 00:21:04.862 } 00:21:04.862 }, 00:21:04.862 { 
00:21:04.862 "method": "nvmf_subsystem_add_listener", 00:21:04.862 "params": { 00:21:04.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.862 "listen_address": { 00:21:04.862 "trtype": "TCP", 00:21:04.862 "adrfam": "IPv4", 00:21:04.862 "traddr": "10.0.0.2", 00:21:04.862 "trsvcid": "4420" 00:21:04.862 }, 00:21:04.862 "secure_channel": false, 00:21:04.862 "sock_impl": "ssl" 00:21:04.862 } 00:21:04.862 } 00:21:04.862 ] 00:21:04.862 } 00:21:04.862 ] 00:21:04.862 }' 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2018535 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2018535 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2018535 ']' 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.862 13:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.862 [2024-11-29 13:05:04.524521] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:21:04.862 [2024-11-29 13:05:04.524573] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.862 [2024-11-29 13:05:04.591405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.862 [2024-11-29 13:05:04.633419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.862 [2024-11-29 13:05:04.633456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.862 [2024-11-29 13:05:04.633464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.862 [2024-11-29 13:05:04.633470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.862 [2024-11-29 13:05:04.633476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:04.862 [2024-11-29 13:05:04.634076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.122 [2024-11-29 13:05:04.848379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.122 [2024-11-29 13:05:04.880418] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.122 [2024-11-29 13:05:04.880616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2018777 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2018777 /var/tmp/bdevperf.sock 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2018777 ']' 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:05.690 "subsystems": [ 00:21:05.690 { 00:21:05.690 "subsystem": "keyring", 00:21:05.690 "config": [ 00:21:05.690 { 00:21:05.690 "method": "keyring_file_add_key", 00:21:05.690 "params": { 00:21:05.690 "name": "key0", 00:21:05.690 "path": "/tmp/tmp.NUzaKyQQ9e" 00:21:05.690 } 00:21:05.690 } 00:21:05.690 ] 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "subsystem": "iobuf", 00:21:05.690 "config": [ 00:21:05.690 { 00:21:05.690 "method": "iobuf_set_options", 00:21:05.690 "params": { 00:21:05.690 "small_pool_count": 8192, 00:21:05.690 "large_pool_count": 1024, 00:21:05.690 "small_bufsize": 8192, 00:21:05.690 "large_bufsize": 135168, 00:21:05.690 "enable_numa": false 00:21:05.690 } 00:21:05.690 } 00:21:05.690 ] 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "subsystem": "sock", 00:21:05.690 "config": [ 00:21:05.690 { 00:21:05.690 "method": "sock_set_default_impl", 00:21:05.690 "params": { 00:21:05.690 "impl_name": "posix" 00:21:05.690 } 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "method": "sock_impl_set_options", 00:21:05.690 "params": { 00:21:05.690 "impl_name": "ssl", 00:21:05.690 "recv_buf_size": 4096, 00:21:05.690 "send_buf_size": 4096, 00:21:05.690 "enable_recv_pipe": true, 00:21:05.690 "enable_quickack": false, 00:21:05.690 "enable_placement_id": 0, 00:21:05.690 "enable_zerocopy_send_server": true, 00:21:05.690 "enable_zerocopy_send_client": false, 00:21:05.690 "zerocopy_threshold": 0, 00:21:05.690 "tls_version": 0, 00:21:05.690 "enable_ktls": false 00:21:05.690 } 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "method": "sock_impl_set_options", 00:21:05.690 "params": { 
00:21:05.690 "impl_name": "posix", 00:21:05.690 "recv_buf_size": 2097152, 00:21:05.690 "send_buf_size": 2097152, 00:21:05.690 "enable_recv_pipe": true, 00:21:05.690 "enable_quickack": false, 00:21:05.690 "enable_placement_id": 0, 00:21:05.690 "enable_zerocopy_send_server": true, 00:21:05.690 "enable_zerocopy_send_client": false, 00:21:05.690 "zerocopy_threshold": 0, 00:21:05.690 "tls_version": 0, 00:21:05.690 "enable_ktls": false 00:21:05.690 } 00:21:05.690 } 00:21:05.690 ] 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "subsystem": "vmd", 00:21:05.690 "config": [] 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "subsystem": "accel", 00:21:05.690 "config": [ 00:21:05.690 { 00:21:05.690 "method": "accel_set_options", 00:21:05.690 "params": { 00:21:05.690 "small_cache_size": 128, 00:21:05.690 "large_cache_size": 16, 00:21:05.690 "task_count": 2048, 00:21:05.690 "sequence_count": 2048, 00:21:05.690 "buf_count": 2048 00:21:05.690 } 00:21:05.690 } 00:21:05.690 ] 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "subsystem": "bdev", 00:21:05.690 "config": [ 00:21:05.690 { 00:21:05.690 "method": "bdev_set_options", 00:21:05.690 "params": { 00:21:05.690 "bdev_io_pool_size": 65535, 00:21:05.690 "bdev_io_cache_size": 256, 00:21:05.690 "bdev_auto_examine": true, 00:21:05.690 "iobuf_small_cache_size": 128, 00:21:05.690 "iobuf_large_cache_size": 16 00:21:05.690 } 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "method": "bdev_raid_set_options", 00:21:05.690 "params": { 00:21:05.690 "process_window_size_kb": 1024, 00:21:05.690 "process_max_bandwidth_mb_sec": 0 00:21:05.690 } 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "method": "bdev_iscsi_set_options", 00:21:05.690 "params": { 00:21:05.690 "timeout_sec": 30 00:21:05.690 } 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "method": "bdev_nvme_set_options", 00:21:05.690 "params": { 00:21:05.690 "action_on_timeout": "none", 00:21:05.690 "timeout_us": 0, 00:21:05.690 "timeout_admin_us": 0, 00:21:05.690 "keep_alive_timeout_ms": 10000, 00:21:05.690 
"arbitration_burst": 0, 00:21:05.690 "low_priority_weight": 0, 00:21:05.690 "medium_priority_weight": 0, 00:21:05.690 "high_priority_weight": 0, 00:21:05.690 "nvme_adminq_poll_period_us": 10000, 00:21:05.690 "nvme_ioq_poll_period_us": 0, 00:21:05.690 "io_queue_requests": 512, 00:21:05.690 "delay_cmd_submit": true, 00:21:05.690 "transport_retry_count": 4, 00:21:05.690 "bdev_retry_count": 3, 00:21:05.690 "transport_ack_timeout": 0, 00:21:05.690 "ctrlr_loss_timeout_sec": 0, 00:21:05.690 "reconnect_delay_sec": 0, 00:21:05.690 "fast_io_fail_timeout_sec": 0, 00:21:05.690 "disable_auto_failback": false, 00:21:05.690 "generate_uuids": false, 00:21:05.690 "transport_tos": 0, 00:21:05.690 "nvme_error_stat": false, 00:21:05.690 "rdma_srq_size": 0, 00:21:05.690 "io_path_stat": false, 00:21:05.690 "allow_accel_sequence": false, 00:21:05.690 "rdma_max_cq_size": 0, 00:21:05.690 "rdma_cm_event_timeout_ms": 0, 00:21:05.690 "dhchap_digests": [ 00:21:05.690 "sha256", 00:21:05.690 "sha384", 00:21:05.690 "sha512" 00:21:05.690 ], 00:21:05.690 "dhchap_dhgroups": [ 00:21:05.690 "null", 00:21:05.690 "ffdhe2048", 00:21:05.690 "ffdhe3072", 00:21:05.690 "ffdhe4096", 00:21:05.690 "ffdhe6144", 00:21:05.690 "ffdhe8192" 00:21:05.690 ] 00:21:05.690 } 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "method": "bdev_nvme_attach_controller", 00:21:05.690 "params": { 00:21:05.690 "name": "nvme0", 00:21:05.690 "trtype": "TCP", 00:21:05.690 "adrfam": "IPv4", 00:21:05.690 "traddr": "10.0.0.2", 00:21:05.690 "trsvcid": "4420", 00:21:05.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.690 "prchk_reftag": false, 00:21:05.690 "prchk_guard": false, 00:21:05.690 "ctrlr_loss_timeout_sec": 0, 00:21:05.690 "reconnect_delay_sec": 0, 00:21:05.690 "fast_io_fail_timeout_sec": 0, 00:21:05.690 "psk": "key0", 00:21:05.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.690 "hdgst": false, 00:21:05.690 "ddgst": false, 00:21:05.690 "multipath": "multipath" 00:21:05.690 } 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 
"method": "bdev_nvme_set_hotplug", 00:21:05.690 "params": { 00:21:05.690 "period_us": 100000, 00:21:05.690 "enable": false 00:21:05.690 } 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "method": "bdev_enable_histogram", 00:21:05.690 "params": { 00:21:05.690 "name": "nvme0n1", 00:21:05.690 "enable": true 00:21:05.690 } 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "method": "bdev_wait_for_examine" 00:21:05.690 } 00:21:05.690 ] 00:21:05.690 }, 00:21:05.690 { 00:21:05.690 "subsystem": "nbd", 00:21:05.690 "config": [] 00:21:05.690 } 00:21:05.690 ] 00:21:05.690 }' 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.690 13:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.690 [2024-11-29 13:05:05.440139] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:21:05.690 [2024-11-29 13:05:05.440187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018777 ] 00:21:05.690 [2024-11-29 13:05:05.502921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.949 [2024-11-29 13:05:05.544438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.949 [2024-11-29 13:05:05.699415] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.517 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.517 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:06.517 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:06.517 13:05:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:06.777 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.777 13:05:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:06.777 Running I/O for 1 seconds... 00:21:08.156 5218.00 IOPS, 20.38 MiB/s 00:21:08.156 Latency(us) 00:21:08.156 [2024-11-29T12:05:07.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.156 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:08.156 Verification LBA range: start 0x0 length 0x2000 00:21:08.156 nvme0n1 : 1.02 5248.50 20.50 0.00 0.00 24171.95 6468.12 30545.47 00:21:08.156 [2024-11-29T12:05:07.976Z] =================================================================================================================== 00:21:08.156 [2024-11-29T12:05:07.976Z] Total : 5248.50 20.50 0.00 0.00 24171.95 6468.12 30545.47 00:21:08.156 { 00:21:08.156 "results": [ 00:21:08.156 { 00:21:08.156 "job": "nvme0n1", 00:21:08.156 "core_mask": "0x2", 00:21:08.156 "workload": "verify", 00:21:08.156 "status": "finished", 00:21:08.156 "verify_range": { 00:21:08.156 "start": 0, 00:21:08.156 "length": 8192 00:21:08.156 }, 00:21:08.156 "queue_depth": 128, 00:21:08.156 "io_size": 4096, 00:21:08.156 "runtime": 1.018576, 00:21:08.156 "iops": 5248.503793531361, 00:21:08.156 "mibps": 20.50196794348188, 00:21:08.156 "io_failed": 0, 00:21:08.156 "io_timeout": 0, 00:21:08.156 "avg_latency_us": 24171.9498105044, 00:21:08.156 "min_latency_us": 6468.118260869565, 00:21:08.156 "max_latency_us": 30545.474782608697 00:21:08.156 } 00:21:08.156 ], 00:21:08.156 "core_count": 1 00:21:08.156 } 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:08.156 13:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:08.156 nvmf_trace.0 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2018777 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2018777 ']' 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2018777 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2018777 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018777' 00:21:08.156 killing process with pid 2018777 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2018777 00:21:08.156 Received shutdown signal, test time was about 1.000000 seconds 00:21:08.156 00:21:08.156 Latency(us) 00:21:08.156 [2024-11-29T12:05:07.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.156 [2024-11-29T12:05:07.976Z] =================================================================================================================== 00:21:08.156 [2024-11-29T12:05:07.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2018777 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.156 rmmod nvme_tcp 00:21:08.156 rmmod nvme_fabrics 00:21:08.156 rmmod nvme_keyring 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2018535 ']' 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2018535 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2018535 ']' 00:21:08.156 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2018535 00:21:08.416 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:08.416 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.416 13:05:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018535 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018535' 00:21:08.416 killing process with pid 2018535 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2018535 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2018535 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.416 13:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1j8JZXr9VG /tmp/tmp.oXHSXcE3qO /tmp/tmp.NUzaKyQQ9e 00:21:10.955 00:21:10.955 real 1m18.427s 00:21:10.955 user 2m0.996s 00:21:10.955 sys 0m29.789s 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.955 ************************************ 00:21:10.955 END TEST nvmf_tls 00:21:10.955 ************************************ 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.955 ************************************ 00:21:10.955 START TEST nvmf_fips 00:21:10.955 ************************************ 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:10.955 * Looking for test storage... 00:21:10.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.955 
13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:10.955 13:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:10.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.955 --rc genhtml_branch_coverage=1 00:21:10.955 --rc genhtml_function_coverage=1 00:21:10.955 --rc genhtml_legend=1 00:21:10.955 --rc geninfo_all_blocks=1 00:21:10.955 --rc geninfo_unexecuted_blocks=1 00:21:10.955 00:21:10.955 ' 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:10.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.955 --rc genhtml_branch_coverage=1 00:21:10.955 --rc genhtml_function_coverage=1 00:21:10.955 --rc genhtml_legend=1 00:21:10.955 --rc geninfo_all_blocks=1 00:21:10.955 --rc geninfo_unexecuted_blocks=1 00:21:10.955 00:21:10.955 ' 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:10.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.955 --rc genhtml_branch_coverage=1 00:21:10.955 --rc genhtml_function_coverage=1 00:21:10.955 --rc genhtml_legend=1 00:21:10.955 --rc geninfo_all_blocks=1 00:21:10.955 --rc geninfo_unexecuted_blocks=1 00:21:10.955 00:21:10.955 ' 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:10.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.955 --rc genhtml_branch_coverage=1 00:21:10.955 --rc genhtml_function_coverage=1 00:21:10.955 --rc genhtml_legend=1 00:21:10.955 --rc geninfo_all_blocks=1 00:21:10.955 --rc geninfo_unexecuted_blocks=1 00:21:10.955 00:21:10.955 ' 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.955 13:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.955 13:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.955 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:10.956 Error setting digest 00:21:10.956 40D2C32C707F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:10.956 40D2C32C707F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.956 13:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.956 13:05:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:16.229 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:16.229 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.229 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:16.230 Found net devices under 0000:86:00.0: cvl_0_0 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:16.230 Found net devices under 0000:86:00.1: cvl_0_1 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.230 13:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.230 13:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:16.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:21:16.489 00:21:16.489 --- 10.0.0.2 ping statistics --- 00:21:16.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.489 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:21:16.489 00:21:16.489 --- 10.0.0.1 ping statistics --- 00:21:16.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.489 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:16.489 13:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2022954 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2022954 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2022954 ']' 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.489 13:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.489 [2024-11-29 13:05:16.196018] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:21:16.489 [2024-11-29 13:05:16.196069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.489 [2024-11-29 13:05:16.262978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.489 [2024-11-29 13:05:16.303567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.489 [2024-11-29 13:05:16.303604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.489 [2024-11-29 13:05:16.303612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.489 [2024-11-29 13:05:16.303618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.489 [2024-11-29 13:05:16.303623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.489 [2024-11-29 13:05:16.304203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.D2n 00:21:17.427 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:17.428 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.D2n 00:21:17.428 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.D2n 00:21:17.428 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.D2n 00:21:17.428 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:17.428 [2024-11-29 13:05:17.227599] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.428 [2024-11-29 13:05:17.243610] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.428 [2024-11-29 13:05:17.243795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.687 malloc0 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2023090 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2023090 /var/tmp/bdevperf.sock 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2023090 ']' 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.687 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:17.687 [2024-11-29 13:05:17.361515] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:21:17.687 [2024-11-29 13:05:17.361570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023090 ] 00:21:17.687 [2024-11-29 13:05:17.421243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.687 [2024-11-29 13:05:17.462187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.946 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.946 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:17.946 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.D2n 00:21:17.946 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:18.213 [2024-11-29 13:05:17.898580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.213 TLSTESTn1 00:21:18.213 13:05:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.473 Running I/O for 10 seconds... 
00:21:20.346 5328.00 IOPS, 20.81 MiB/s [2024-11-29T12:05:21.103Z] 5374.50 IOPS, 20.99 MiB/s [2024-11-29T12:05:22.481Z] 5401.00 IOPS, 21.10 MiB/s [2024-11-29T12:05:23.416Z] 5421.00 IOPS, 21.18 MiB/s [2024-11-29T12:05:24.351Z] 5441.20 IOPS, 21.25 MiB/s [2024-11-29T12:05:25.293Z] 5390.67 IOPS, 21.06 MiB/s [2024-11-29T12:05:26.230Z] 5397.57 IOPS, 21.08 MiB/s [2024-11-29T12:05:27.166Z] 5368.88 IOPS, 20.97 MiB/s [2024-11-29T12:05:28.102Z] 5368.56 IOPS, 20.97 MiB/s [2024-11-29T12:05:28.361Z] 5366.60 IOPS, 20.96 MiB/s 00:21:28.541 Latency(us) 00:21:28.541 [2024-11-29T12:05:28.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.541 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:28.541 Verification LBA range: start 0x0 length 0x2000 00:21:28.541 TLSTESTn1 : 10.02 5370.80 20.98 0.00 0.00 23795.19 5214.39 21655.37 00:21:28.541 [2024-11-29T12:05:28.361Z] =================================================================================================================== 00:21:28.541 [2024-11-29T12:05:28.361Z] Total : 5370.80 20.98 0.00 0.00 23795.19 5214.39 21655.37 00:21:28.541 { 00:21:28.541 "results": [ 00:21:28.541 { 00:21:28.541 "job": "TLSTESTn1", 00:21:28.541 "core_mask": "0x4", 00:21:28.541 "workload": "verify", 00:21:28.541 "status": "finished", 00:21:28.541 "verify_range": { 00:21:28.541 "start": 0, 00:21:28.541 "length": 8192 00:21:28.541 }, 00:21:28.541 "queue_depth": 128, 00:21:28.541 "io_size": 4096, 00:21:28.542 "runtime": 10.015455, 00:21:28.542 "iops": 5370.799429481736, 00:21:28.542 "mibps": 20.979685271413032, 00:21:28.542 "io_failed": 0, 00:21:28.542 "io_timeout": 0, 00:21:28.542 "avg_latency_us": 23795.185418346206, 00:21:28.542 "min_latency_us": 5214.3860869565215, 00:21:28.542 "max_latency_us": 21655.373913043477 00:21:28.542 } 00:21:28.542 ], 00:21:28.542 "core_count": 1 00:21:28.542 } 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:28.542 
13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:28.542 nvmf_trace.0 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2023090 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2023090 ']' 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2023090 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023090 00:21:28.542 13:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023090' 00:21:28.542 killing process with pid 2023090 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2023090 00:21:28.542 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.542 00:21:28.542 Latency(us) 00:21:28.542 [2024-11-29T12:05:28.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.542 [2024-11-29T12:05:28.362Z] =================================================================================================================== 00:21:28.542 [2024-11-29T12:05:28.362Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.542 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2023090 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:28.802 rmmod nvme_tcp 00:21:28.802 rmmod nvme_fabrics 00:21:28.802 rmmod nvme_keyring 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2022954 ']' 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2022954 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2022954 ']' 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2022954 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2022954 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2022954' 00:21:28.802 killing process with pid 2022954 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2022954 00:21:28.802 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2022954 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.062 13:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.969 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:30.969 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.D2n 00:21:30.969 00:21:30.969 real 0m20.425s 00:21:30.969 user 0m21.874s 00:21:30.969 sys 0m9.061s 00:21:30.969 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.969 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.969 ************************************ 00:21:30.969 END TEST nvmf_fips 00:21:30.969 ************************************ 00:21:31.228 13:05:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:31.228 13:05:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.228 13:05:30 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.228 13:05:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:31.228 ************************************ 00:21:31.228 START TEST nvmf_control_msg_list 00:21:31.228 ************************************ 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:31.229 * Looking for test storage... 00:21:31.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.229 13:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:31.229 13:05:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.229 --rc genhtml_branch_coverage=1 00:21:31.229 --rc genhtml_function_coverage=1 00:21:31.229 --rc genhtml_legend=1 00:21:31.229 --rc geninfo_all_blocks=1 00:21:31.229 --rc geninfo_unexecuted_blocks=1 00:21:31.229 00:21:31.229 ' 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.229 --rc genhtml_branch_coverage=1 00:21:31.229 --rc genhtml_function_coverage=1 00:21:31.229 --rc genhtml_legend=1 00:21:31.229 --rc geninfo_all_blocks=1 00:21:31.229 --rc geninfo_unexecuted_blocks=1 00:21:31.229 00:21:31.229 ' 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.229 --rc genhtml_branch_coverage=1 00:21:31.229 --rc genhtml_function_coverage=1 00:21:31.229 --rc genhtml_legend=1 00:21:31.229 --rc geninfo_all_blocks=1 00:21:31.229 --rc geninfo_unexecuted_blocks=1 00:21:31.229 00:21:31.229 ' 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:21:31.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.229 --rc genhtml_branch_coverage=1 00:21:31.229 --rc genhtml_function_coverage=1 00:21:31.229 --rc genhtml_legend=1 00:21:31.229 --rc geninfo_all_blocks=1 00:21:31.229 --rc geninfo_unexecuted_blocks=1 00:21:31.229 00:21:31.229 ' 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.229 13:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.229 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.230 13:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.230 13:05:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.516 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:36.517 13:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:36.517 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:36.517 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:36.517 13:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:36.517 Found net devices under 0000:86:00.0: cvl_0_0 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.517 13:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:36.517 Found net devices under 0000:86:00.1: cvl_0_1 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.517 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.518 13:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:36.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:21:36.518 00:21:36.518 --- 10.0.0.2 ping statistics --- 00:21:36.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.518 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:21:36.518 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:21:36.518 00:21:36.518 --- 10.0.0.1 ping statistics --- 00:21:36.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.518 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2028354 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2028354 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2028354 ']' 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:36.518 [2024-11-29 13:05:36.098987] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:21:36.518 [2024-11-29 13:05:36.099037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.518 [2024-11-29 13:05:36.164260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.518 [2024-11-29 13:05:36.205482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.518 [2024-11-29 13:05:36.205520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.518 [2024-11-29 13:05:36.205527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.518 [2024-11-29 13:05:36.205533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.518 [2024-11-29 13:05:36.205538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.518 [2024-11-29 13:05:36.206132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.518 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.778 [2024-11-29 13:05:36.343127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.778 Malloc0 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:36.778 [2024-11-29 13:05:36.379474] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2028375 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2028376 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2028377 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2028375 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.778 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:36.778 [2024-11-29 13:05:36.437857] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:36.778 [2024-11-29 13:05:36.447875] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:36.778 [2024-11-29 13:05:36.448034] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:37.730 Initializing NVMe Controllers 00:21:37.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:37.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:37.730 Initialization complete. Launching workers. 00:21:37.730 ======================================================== 00:21:37.730 Latency(us) 00:21:37.730 Device Information : IOPS MiB/s Average min max 00:21:37.730 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 30.00 0.12 34126.86 177.12 41300.96 00:21:37.730 ======================================================== 00:21:37.730 Total : 30.00 0.12 34126.86 177.12 41300.96 00:21:37.730 00:21:37.730 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2028376 00:21:37.989 Initializing NVMe Controllers 00:21:37.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:37.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:37.989 Initialization complete. Launching workers. 
00:21:37.989 ======================================================== 00:21:37.989 Latency(us) 00:21:37.989 Device Information : IOPS MiB/s Average min max 00:21:37.989 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5995.00 23.42 166.43 139.72 367.82 00:21:37.989 ======================================================== 00:21:37.989 Total : 5995.00 23.42 166.43 139.72 367.82 00:21:37.989 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2028377 00:21:37.989 Initializing NVMe Controllers 00:21:37.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:37.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:37.989 Initialization complete. Launching workers. 00:21:37.989 ======================================================== 00:21:37.989 Latency(us) 00:21:37.989 Device Information : IOPS MiB/s Average min max 00:21:37.989 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40935.51 40767.38 41929.71 00:21:37.989 ======================================================== 00:21:37.989 Total : 25.00 0.10 40935.51 40767.38 41929.71 00:21:37.989 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:37.989 13:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.989 rmmod nvme_tcp 00:21:37.989 rmmod nvme_fabrics 00:21:37.989 rmmod nvme_keyring 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2028354 ']' 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2028354 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2028354 ']' 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2028354 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.989 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2028354 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2028354' 00:21:38.248 killing process with pid 2028354 00:21:38.248 
13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2028354 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2028354 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.248 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.784 00:21:40.784 real 0m9.225s 00:21:40.784 user 0m6.378s 00:21:40.784 sys 0m4.857s 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:40.784 ************************************ 00:21:40.784 END TEST nvmf_control_msg_list 00:21:40.784 ************************************ 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:40.784 ************************************ 00:21:40.784 START TEST nvmf_wait_for_buf 00:21:40.784 ************************************ 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:40.784 * Looking for test storage... 
00:21:40.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:40.784 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:21:40.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.785 --rc genhtml_branch_coverage=1 00:21:40.785 --rc genhtml_function_coverage=1 00:21:40.785 --rc genhtml_legend=1 00:21:40.785 --rc geninfo_all_blocks=1 00:21:40.785 --rc geninfo_unexecuted_blocks=1 00:21:40.785 00:21:40.785 ' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:40.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.785 --rc genhtml_branch_coverage=1 00:21:40.785 --rc genhtml_function_coverage=1 00:21:40.785 --rc genhtml_legend=1 00:21:40.785 --rc geninfo_all_blocks=1 00:21:40.785 --rc geninfo_unexecuted_blocks=1 00:21:40.785 00:21:40.785 ' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:40.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.785 --rc genhtml_branch_coverage=1 00:21:40.785 --rc genhtml_function_coverage=1 00:21:40.785 --rc genhtml_legend=1 00:21:40.785 --rc geninfo_all_blocks=1 00:21:40.785 --rc geninfo_unexecuted_blocks=1 00:21:40.785 00:21:40.785 ' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:40.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.785 --rc genhtml_branch_coverage=1 00:21:40.785 --rc genhtml_function_coverage=1 00:21:40.785 --rc genhtml_legend=1 00:21:40.785 --rc geninfo_all_blocks=1 00:21:40.785 --rc geninfo_unexecuted_blocks=1 00:21:40.785 00:21:40.785 ' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:40.785 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.786 13:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:46.054 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.054 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:46.055 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:46.055 Found net devices under 0000:86:00.0: cvl_0_0 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.055 13:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:46.055 Found net devices under 0000:86:00.1: cvl_0_1 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.055 13:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.055 13:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:21:46.055 00:21:46.055 --- 10.0.0.2 ping statistics --- 00:21:46.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.055 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:21:46.055 00:21:46.055 --- 10.0.0.1 ping statistics --- 00:21:46.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.055 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2031911 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 2031911 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2031911 ']' 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.055 [2024-11-29 13:05:45.432208] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:21:46.055 [2024-11-29 13:05:45.432254] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.055 [2024-11-29 13:05:45.498910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.055 [2024-11-29 13:05:45.540452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.055 [2024-11-29 13:05:45.540489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:46.055 [2024-11-29 13:05:45.540496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.055 [2024-11-29 13:05:45.540502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.055 [2024-11-29 13:05:45.540506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.055 [2024-11-29 13:05:45.541103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.055 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.056 
13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.056 Malloc0 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.056 [2024-11-29 13:05:45.696528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:46.056 [2024-11-29 13:05:45.720704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:46.056 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:46.056 [2024-11-29 13:05:45.795026] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:47.433 Initializing NVMe Controllers 00:21:47.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:47.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:47.433 Initialization complete. Launching workers. 00:21:47.433 ======================================================== 00:21:47.433 Latency(us) 00:21:47.433 Device Information : IOPS MiB/s Average min max 00:21:47.433 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.55 16.07 32208.23 7285.61 63846.13 00:21:47.433 ======================================================== 00:21:47.433 Total : 128.55 16.07 32208.23 7285.61 63846.13 00:21:47.433 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.433 13:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.433 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.433 rmmod nvme_tcp 00:21:47.433 rmmod nvme_fabrics 00:21:47.692 rmmod nvme_keyring 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2031911 ']' 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2031911 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2031911 ']' 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2031911 
00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2031911 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2031911' 00:21:47.692 killing process with pid 2031911 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2031911 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2031911 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:47.692 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.693 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.693 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.693 13:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.693 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.693 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.693 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.231 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.231 00:21:50.231 real 0m9.442s 00:21:50.231 user 0m3.568s 00:21:50.231 sys 0m4.258s 00:21:50.231 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.231 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:50.231 ************************************ 00:21:50.231 END TEST nvmf_wait_for_buf 00:21:50.231 ************************************ 00:21:50.231 13:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:50.231 13:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:50.231 13:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:50.231 13:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:50.231 13:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.231 13:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:55.509 
13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:55.509 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.509 13:05:54 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:55.509 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:55.509 Found net devices under 0000:86:00.0: cvl_0_0 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:55.509 Found net devices under 0000:86:00.1: cvl_0_1 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.509 13:05:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:55.509 ************************************ 00:21:55.509 START TEST nvmf_perf_adq 00:21:55.509 ************************************ 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:55.509 * Looking for test storage... 00:21:55.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:55.509 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:55.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.510 --rc genhtml_branch_coverage=1 00:21:55.510 --rc genhtml_function_coverage=1 00:21:55.510 --rc genhtml_legend=1 00:21:55.510 --rc geninfo_all_blocks=1 00:21:55.510 --rc geninfo_unexecuted_blocks=1 00:21:55.510 00:21:55.510 ' 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:55.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.510 --rc genhtml_branch_coverage=1 00:21:55.510 --rc genhtml_function_coverage=1 00:21:55.510 --rc genhtml_legend=1 00:21:55.510 --rc geninfo_all_blocks=1 00:21:55.510 --rc geninfo_unexecuted_blocks=1 00:21:55.510 00:21:55.510 ' 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:55.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.510 --rc genhtml_branch_coverage=1 00:21:55.510 --rc genhtml_function_coverage=1 00:21:55.510 --rc genhtml_legend=1 00:21:55.510 --rc geninfo_all_blocks=1 00:21:55.510 --rc geninfo_unexecuted_blocks=1 00:21:55.510 00:21:55.510 ' 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:55.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.510 --rc genhtml_branch_coverage=1 00:21:55.510 --rc genhtml_function_coverage=1 00:21:55.510 --rc genhtml_legend=1 00:21:55.510 --rc geninfo_all_blocks=1 00:21:55.510 --rc geninfo_unexecuted_blocks=1 00:21:55.510 00:21:55.510 ' 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.510 13:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.510 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.781 13:06:00 
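The trace above records a genuine shell error: `'[' '' -eq 1 ']'` fails with "integer expression expected" because the variable under test expands to an empty string, which `-eq` cannot parse as a number. A hedged sketch of the usual guard (the `flag_enabled` name is illustrative, not from the SPDK scripts): default the value before the numeric test so `[` never sees an empty operand.

```shell
# ${1:-0} substitutes 0 when the argument is unset or empty, so the
# numeric comparison always receives a valid integer.
flag_enabled() {
  [ "${1:-0}" -eq 1 ]
}
```

With this guard, an unset or empty flag simply reads as disabled instead of producing the "[: : integer expression expected" diagnostic seen in the log.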
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:00.781 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:00.781 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:00.781 Found net devices under 0000:86:00.0: cvl_0_0 00:22:00.781 13:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:00.781 Found net devices under 0000:86:00.1: cvl_0_1 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
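The device-discovery steps traced above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by the `"${pci_net_devs[@]##*/}"` strip) map a PCI address to its kernel network interfaces by globbing sysfs. A minimal sketch of that step; the `sysfs_root` parameter is an assumption added here only so the logic can be exercised against a fake tree, whereas the real script globs `/sys` directly:

```shell
# List the network interface names bound to a PCI device by globbing
# <sysfs_root>/devices/<pci>/net/*. Like the original, this relies on
# the glob matching; with no interfaces present the literal pattern
# would be echoed back.
net_devs_for_pci() {
  local sysfs_root=$1 pci=$2
  local -a pci_net_devs=("$sysfs_root/devices/$pci/net/"*)
  # Keep only the basename of each entry, i.e. the interface name,
  # mirroring the ##*/ parameter expansion in nvmf/common.sh.
  pci_net_devs=("${pci_net_devs[@]##*/}")
  printf '%s\n' "${pci_net_devs[@]}"
}
```

In the log this is what turns `0000:86:00.0` into `cvl_0_0` and `0000:86:00.1` into `cvl_0_1`.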
00:22:00.781 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:02.158 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:04.064 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:09.342 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:09.342 13:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:09.342 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.342 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:09.343 Found net devices under 0000:86:00.0: cvl_0_0 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:09.343 Found net devices under 0000:86:00.1: cvl_0_1 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
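The `nvmf_tcp_init` sequence traced above builds a two-namespace test topology on one dual-port NIC: the target-side port is moved into a fresh network namespace and addressed, the initiator-side port stays in the root namespace, the NVMe/TCP port is opened in the firewall, and a ping in each direction verifies the data path before the target starts. A condensed, annotated sketch of those commands (interface and namespace names are the ones from this log; running it requires the same hardware and root privileges, so it is a setup fragment rather than a portable script):

```shell
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side, root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
# Open TCP 4420 toward the initiator interface; the comment tag lets the
# rule be found and removed during cleanup, as the ipts wrapper does.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                        # root ns -> target ns
ip netns exec "$ns" ping -c 1 10.0.0.1    # target ns -> root ns
```

Because both ports sit on the same physical NIC, moving one into a namespace forces traffic between them through the wire rather than the kernel loopback, which is what makes the subsequent performance numbers meaningful.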
00:22:09.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:22:09.343 00:22:09.343 --- 10.0.0.2 ping statistics --- 00:22:09.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.343 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:22:09.343 00:22:09.343 --- 10.0.0.1 ping statistics --- 00:22:09.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.343 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2040610 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2040610 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2040610 ']' 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.343 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.343 [2024-11-29 13:06:08.905427] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:22:09.343 [2024-11-29 13:06:08.905472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.343 [2024-11-29 13:06:08.972128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.343 [2024-11-29 13:06:09.015863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.343 [2024-11-29 13:06:09.015903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.343 [2024-11-29 13:06:09.015910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.343 [2024-11-29 13:06:09.015918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.343 [2024-11-29 13:06:09.015924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.343 [2024-11-29 13:06:09.017436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.343 [2024-11-29 13:06:09.017537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.343 [2024-11-29 13:06:09.017644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.343 [2024-11-29 13:06:09.017646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:09.343 13:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.343 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.602 [2024-11-29 13:06:09.227812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.602 Malloc1 00:22:09.602 13:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.602 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.603 [2024-11-29 13:06:09.288703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2040684 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:09.603 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:11.506 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:11.506 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.506 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.506 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.506 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:11.506 "tick_rate": 2300000000, 00:22:11.506 "poll_groups": [ 00:22:11.506 { 00:22:11.506 "name": "nvmf_tgt_poll_group_000", 00:22:11.506 "admin_qpairs": 1, 00:22:11.506 "io_qpairs": 1, 00:22:11.506 "current_admin_qpairs": 1, 00:22:11.506 "current_io_qpairs": 1, 00:22:11.506 "pending_bdev_io": 0, 00:22:11.506 "completed_nvme_io": 19483, 00:22:11.506 "transports": [ 00:22:11.506 { 00:22:11.506 "trtype": "TCP" 00:22:11.506 } 00:22:11.506 ] 00:22:11.506 }, 00:22:11.506 { 00:22:11.506 "name": "nvmf_tgt_poll_group_001", 00:22:11.506 "admin_qpairs": 0, 00:22:11.506 "io_qpairs": 1, 00:22:11.506 "current_admin_qpairs": 0, 00:22:11.506 "current_io_qpairs": 1, 00:22:11.506 "pending_bdev_io": 0, 00:22:11.506 "completed_nvme_io": 19911, 00:22:11.506 "transports": [ 00:22:11.506 { 00:22:11.506 "trtype": "TCP" 00:22:11.506 } 00:22:11.506 ] 00:22:11.506 }, 00:22:11.506 { 00:22:11.506 "name": "nvmf_tgt_poll_group_002", 00:22:11.506 "admin_qpairs": 0, 00:22:11.506 "io_qpairs": 1, 00:22:11.506 "current_admin_qpairs": 0, 00:22:11.506 "current_io_qpairs": 1, 00:22:11.506 "pending_bdev_io": 0, 00:22:11.506 "completed_nvme_io": 
19762, 00:22:11.506 "transports": [ 00:22:11.506 { 00:22:11.506 "trtype": "TCP" 00:22:11.506 } 00:22:11.506 ] 00:22:11.506 }, 00:22:11.506 { 00:22:11.506 "name": "nvmf_tgt_poll_group_003", 00:22:11.506 "admin_qpairs": 0, 00:22:11.506 "io_qpairs": 1, 00:22:11.506 "current_admin_qpairs": 0, 00:22:11.506 "current_io_qpairs": 1, 00:22:11.506 "pending_bdev_io": 0, 00:22:11.506 "completed_nvme_io": 19381, 00:22:11.506 "transports": [ 00:22:11.506 { 00:22:11.506 "trtype": "TCP" 00:22:11.506 } 00:22:11.506 ] 00:22:11.506 } 00:22:11.506 ] 00:22:11.506 }' 00:22:11.506 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:11.764 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:11.765 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:11.765 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:11.765 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2040684 00:22:19.875 Initializing NVMe Controllers 00:22:19.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:19.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:19.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:19.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:19.875 Initialization complete. Launching workers. 
00:22:19.875 ======================================================== 00:22:19.875 Latency(us) 00:22:19.875 Device Information : IOPS MiB/s Average min max 00:22:19.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10581.30 41.33 6048.32 2320.95 10419.77 00:22:19.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10793.50 42.16 5928.62 1893.42 10488.09 00:22:19.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10723.10 41.89 5967.60 2009.01 10525.00 00:22:19.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10599.70 41.41 6038.29 1893.36 10493.77 00:22:19.875 ======================================================== 00:22:19.875 Total : 42697.60 166.79 5995.30 1893.36 10525.00 00:22:19.875 00:22:19.875 [2024-11-29 13:06:19.484842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4d500 is same with the state(6) to be set 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:19.875 rmmod nvme_tcp 00:22:19.875 rmmod nvme_fabrics 00:22:19.875 rmmod nvme_keyring 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- 
# set -e 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2040610 ']' 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2040610 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2040610 ']' 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2040610 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2040610 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:19.875 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2040610' 00:22:19.875 killing process with pid 2040610 00:22:19.876 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2040610 00:22:19.876 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2040610 00:22:20.135 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:20.135 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:20.135 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:20.135 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 
-- # iptr 00:22:20.136 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:20.136 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:20.136 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:20.136 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.136 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:20.136 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.136 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.136 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.672 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:22.672 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:22.672 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:22.672 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:23.273 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:25.873 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.155 13:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # 
net_devs=() 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:31.155 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.156 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:31.156 
13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.156 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.0: cvl_0_0' 00:22:31.156 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.156 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:31.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:22:31.156 00:22:31.156 --- 10.0.0.2 ping statistics --- 00:22:31.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.156 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:31.156 00:22:31.156 --- 10.0.0.1 ping statistics --- 00:22:31.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.156 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:31.156 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:31.157 net.core.busy_poll = 1 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:31.157 net.core.busy_read = 1 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2044584 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2044584 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2044584 ']' 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.157 [2024-11-29 13:06:30.644102] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:22:31.157 [2024-11-29 13:06:30.644157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.157 [2024-11-29 13:06:30.714783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.157 [2024-11-29 13:06:30.757993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.157 [2024-11-29 13:06:30.758034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.157 [2024-11-29 13:06:30.758041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.157 [2024-11-29 13:06:30.758048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:31.157 [2024-11-29 13:06:30.758053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.157 [2024-11-29 13:06:30.759538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.157 [2024-11-29 13:06:30.759632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.157 [2024-11-29 13:06:30.759738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.157 [2024-11-29 13:06:30.759739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.157 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.428 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.428 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:31.429 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.429 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.429 [2024-11-29 13:06:30.987157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.429 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.429 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:31.429 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.429 13:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.429 Malloc1 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.429 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.430 [2024-11-29 13:06:31.051454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.430 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.430 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2044811 
00:22:31.430 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:31.430 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:33.336 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:33.336 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.336 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:33.336 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.336 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:33.336 "tick_rate": 2300000000, 00:22:33.336 "poll_groups": [ 00:22:33.336 { 00:22:33.336 "name": "nvmf_tgt_poll_group_000", 00:22:33.336 "admin_qpairs": 1, 00:22:33.336 "io_qpairs": 2, 00:22:33.336 "current_admin_qpairs": 1, 00:22:33.336 "current_io_qpairs": 2, 00:22:33.336 "pending_bdev_io": 0, 00:22:33.336 "completed_nvme_io": 28074, 00:22:33.336 "transports": [ 00:22:33.336 { 00:22:33.336 "trtype": "TCP" 00:22:33.336 } 00:22:33.336 ] 00:22:33.336 }, 00:22:33.336 { 00:22:33.336 "name": "nvmf_tgt_poll_group_001", 00:22:33.336 "admin_qpairs": 0, 00:22:33.336 "io_qpairs": 2, 00:22:33.336 "current_admin_qpairs": 0, 00:22:33.336 "current_io_qpairs": 2, 00:22:33.336 "pending_bdev_io": 0, 00:22:33.336 "completed_nvme_io": 27514, 00:22:33.336 "transports": [ 00:22:33.336 { 00:22:33.336 "trtype": "TCP" 00:22:33.336 } 00:22:33.336 ] 00:22:33.336 }, 00:22:33.336 { 00:22:33.337 "name": "nvmf_tgt_poll_group_002", 00:22:33.337 "admin_qpairs": 0, 00:22:33.337 "io_qpairs": 0, 00:22:33.337 "current_admin_qpairs": 0, 
00:22:33.337 "current_io_qpairs": 0, 00:22:33.337 "pending_bdev_io": 0, 00:22:33.337 "completed_nvme_io": 0, 00:22:33.337 "transports": [ 00:22:33.337 { 00:22:33.337 "trtype": "TCP" 00:22:33.337 } 00:22:33.337 ] 00:22:33.337 }, 00:22:33.337 { 00:22:33.337 "name": "nvmf_tgt_poll_group_003", 00:22:33.337 "admin_qpairs": 0, 00:22:33.337 "io_qpairs": 0, 00:22:33.337 "current_admin_qpairs": 0, 00:22:33.337 "current_io_qpairs": 0, 00:22:33.337 "pending_bdev_io": 0, 00:22:33.337 "completed_nvme_io": 0, 00:22:33.337 "transports": [ 00:22:33.337 { 00:22:33.337 "trtype": "TCP" 00:22:33.337 } 00:22:33.337 ] 00:22:33.337 } 00:22:33.337 ] 00:22:33.337 }' 00:22:33.337 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:33.337 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:33.337 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:33.337 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:33.337 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2044811 00:22:41.456 Initializing NVMe Controllers 00:22:41.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:41.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:41.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:41.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:41.456 Initialization complete. Launching workers. 
00:22:41.456 ======================================================== 00:22:41.456 Latency(us) 00:22:41.456 Device Information : IOPS MiB/s Average min max 00:22:41.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7532.80 29.42 8523.28 1373.51 53546.91 00:22:41.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7329.10 28.63 8733.89 1561.61 54034.58 00:22:41.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8682.80 33.92 7394.13 1513.10 55001.49 00:22:41.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6132.70 23.96 10434.73 1471.63 54927.78 00:22:41.456 ======================================================== 00:22:41.456 Total : 29677.39 115.93 8639.93 1373.51 55001.49 00:22:41.456 00:22:41.456 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:41.456 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.456 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:41.456 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.456 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:41.456 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.456 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.456 rmmod nvme_tcp 00:22:41.715 rmmod nvme_fabrics 00:22:41.715 rmmod nvme_keyring 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:41.715 13:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2044584 ']' 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2044584 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2044584 ']' 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2044584 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2044584 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2044584' 00:22:41.715 killing process with pid 2044584 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2044584 00:22:41.715 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2044584 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:41.975 
13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.975 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.268 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:45.269 00:22:45.269 real 0m49.605s 00:22:45.269 user 2m43.946s 00:22:45.269 sys 0m9.811s 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.269 ************************************ 00:22:45.269 END TEST nvmf_perf_adq 00:22:45.269 ************************************ 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:45.269 ************************************ 00:22:45.269 START TEST nvmf_shutdown 00:22:45.269 ************************************ 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:45.269 * Looking for test storage... 00:22:45.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.269 13:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:45.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.269 --rc genhtml_branch_coverage=1 00:22:45.269 --rc genhtml_function_coverage=1 00:22:45.269 --rc genhtml_legend=1 00:22:45.269 --rc geninfo_all_blocks=1 00:22:45.269 --rc geninfo_unexecuted_blocks=1 00:22:45.269 00:22:45.269 ' 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:45.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.269 --rc genhtml_branch_coverage=1 00:22:45.269 --rc genhtml_function_coverage=1 00:22:45.269 --rc genhtml_legend=1 00:22:45.269 --rc geninfo_all_blocks=1 00:22:45.269 --rc geninfo_unexecuted_blocks=1 00:22:45.269 00:22:45.269 ' 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:45.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.269 --rc genhtml_branch_coverage=1 00:22:45.269 --rc genhtml_function_coverage=1 00:22:45.269 --rc genhtml_legend=1 00:22:45.269 --rc geninfo_all_blocks=1 00:22:45.269 --rc geninfo_unexecuted_blocks=1 00:22:45.269 00:22:45.269 ' 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:45.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.269 --rc genhtml_branch_coverage=1 00:22:45.269 --rc genhtml_function_coverage=1 00:22:45.269 --rc genhtml_legend=1 00:22:45.269 --rc geninfo_all_blocks=1 00:22:45.269 --rc geninfo_unexecuted_blocks=1 00:22:45.269 00:22:45.269 ' 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.269 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:45.270 ************************************ 00:22:45.270 START TEST nvmf_shutdown_tc1 00:22:45.270 ************************************ 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.270 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.536 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.536 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.536 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.536 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.536 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.536 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:50.537 13:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.537 13:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:50.537 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.537 13:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:50.537 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:50.537 Found net devices under 0000:86:00.0: cvl_0_0 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:50.537 Found net devices under 0000:86:00.1: cvl_0_1 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.537 13:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.537 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:22:50.796 00:22:50.796 --- 10.0.0.2 ping statistics --- 00:22:50.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.796 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:22:50.796 00:22:50.796 --- 10.0.0.1 ping statistics --- 00:22:50.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.796 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.796 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2050184 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2050184 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2050184 ']' 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:50.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.797 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.797 [2024-11-29 13:06:50.595699] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:22:50.797 [2024-11-29 13:06:50.595745] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.056 [2024-11-29 13:06:50.662341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.056 [2024-11-29 13:06:50.708080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.056 [2024-11-29 13:06:50.708118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.056 [2024-11-29 13:06:50.708125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.056 [2024-11-29 13:06:50.708132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.056 [2024-11-29 13:06:50.708137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:51.056 [2024-11-29 13:06:50.709636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.056 [2024-11-29 13:06:50.709714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.056 [2024-11-29 13:06:50.709831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.056 [2024-11-29 13:06:50.709832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.056 [2024-11-29 13:06:50.847259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.056 13:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.056 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.315 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.315 Malloc1 00:22:51.315 [2024-11-29 13:06:50.951863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.315 Malloc2 00:22:51.315 Malloc3 00:22:51.315 Malloc4 00:22:51.315 Malloc5 00:22:51.575 Malloc6 00:22:51.575 Malloc7 00:22:51.575 Malloc8 00:22:51.575 Malloc9 
00:22:51.575 Malloc10 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2050318 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2050318 /var/tmp/bdevperf.sock 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2050318 ']' 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.575 { 00:22:51.575 "params": { 00:22:51.575 "name": "Nvme$subsystem", 00:22:51.575 "trtype": "$TEST_TRANSPORT", 00:22:51.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.575 "adrfam": "ipv4", 00:22:51.575 "trsvcid": "$NVMF_PORT", 00:22:51.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.575 "hdgst": ${hdgst:-false}, 00:22:51.575 "ddgst": ${ddgst:-false} 00:22:51.575 }, 00:22:51.575 "method": "bdev_nvme_attach_controller" 00:22:51.575 } 00:22:51.575 EOF 00:22:51.575 )") 00:22:51.575 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.834 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.834 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.834 { 00:22:51.834 "params": { 00:22:51.835 "name": "Nvme$subsystem", 00:22:51.835 "trtype": "$TEST_TRANSPORT", 00:22:51.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.835 "adrfam": "ipv4", 00:22:51.835 "trsvcid": "$NVMF_PORT", 00:22:51.835 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.835 "hdgst": ${hdgst:-false}, 00:22:51.835 "ddgst": ${ddgst:-false} 00:22:51.835 }, 00:22:51.835 "method": "bdev_nvme_attach_controller" 00:22:51.835 } 00:22:51.835 EOF 00:22:51.835 )") 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.835 { 00:22:51.835 "params": { 00:22:51.835 "name": "Nvme$subsystem", 00:22:51.835 "trtype": "$TEST_TRANSPORT", 00:22:51.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.835 "adrfam": "ipv4", 00:22:51.835 "trsvcid": "$NVMF_PORT", 00:22:51.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.835 "hdgst": ${hdgst:-false}, 00:22:51.835 "ddgst": ${ddgst:-false} 00:22:51.835 }, 00:22:51.835 "method": "bdev_nvme_attach_controller" 00:22:51.835 } 00:22:51.835 EOF 00:22:51.835 )") 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.835 { 00:22:51.835 "params": { 00:22:51.835 "name": "Nvme$subsystem", 00:22:51.835 "trtype": "$TEST_TRANSPORT", 00:22:51.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.835 "adrfam": "ipv4", 00:22:51.835 "trsvcid": "$NVMF_PORT", 00:22:51.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.835 "hdgst": 
${hdgst:-false}, 00:22:51.835 "ddgst": ${ddgst:-false} 00:22:51.835 }, 00:22:51.835 "method": "bdev_nvme_attach_controller" 00:22:51.835 } 00:22:51.835 EOF 00:22:51.835 )") 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.835 { 00:22:51.835 "params": { 00:22:51.835 "name": "Nvme$subsystem", 00:22:51.835 "trtype": "$TEST_TRANSPORT", 00:22:51.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.835 "adrfam": "ipv4", 00:22:51.835 "trsvcid": "$NVMF_PORT", 00:22:51.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.835 "hdgst": ${hdgst:-false}, 00:22:51.835 "ddgst": ${ddgst:-false} 00:22:51.835 }, 00:22:51.835 "method": "bdev_nvme_attach_controller" 00:22:51.835 } 00:22:51.835 EOF 00:22:51.835 )") 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.835 { 00:22:51.835 "params": { 00:22:51.835 "name": "Nvme$subsystem", 00:22:51.835 "trtype": "$TEST_TRANSPORT", 00:22:51.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.835 "adrfam": "ipv4", 00:22:51.835 "trsvcid": "$NVMF_PORT", 00:22:51.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.835 "hdgst": ${hdgst:-false}, 00:22:51.835 "ddgst": ${ddgst:-false} 00:22:51.835 }, 00:22:51.835 "method": "bdev_nvme_attach_controller" 
00:22:51.835 } 00:22:51.835 EOF 00:22:51.835 )") 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.835 [2024-11-29 13:06:51.431026] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:22:51.835 [2024-11-29 13:06:51.431073] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.835 { 00:22:51.835 "params": { 00:22:51.835 "name": "Nvme$subsystem", 00:22:51.835 "trtype": "$TEST_TRANSPORT", 00:22:51.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.835 "adrfam": "ipv4", 00:22:51.835 "trsvcid": "$NVMF_PORT", 00:22:51.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.835 "hdgst": ${hdgst:-false}, 00:22:51.835 "ddgst": ${ddgst:-false} 00:22:51.835 }, 00:22:51.835 "method": "bdev_nvme_attach_controller" 00:22:51.835 } 00:22:51.835 EOF 00:22:51.835 )") 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.835 { 00:22:51.835 "params": { 00:22:51.835 "name": "Nvme$subsystem", 00:22:51.835 "trtype": "$TEST_TRANSPORT", 00:22:51.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.835 "adrfam": "ipv4", 00:22:51.835 "trsvcid": "$NVMF_PORT", 
00:22:51.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.835 "hdgst": ${hdgst:-false}, 00:22:51.835 "ddgst": ${ddgst:-false} 00:22:51.835 }, 00:22:51.835 "method": "bdev_nvme_attach_controller" 00:22:51.835 } 00:22:51.835 EOF 00:22:51.835 )") 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.835 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.835 { 00:22:51.835 "params": { 00:22:51.835 "name": "Nvme$subsystem", 00:22:51.835 "trtype": "$TEST_TRANSPORT", 00:22:51.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.835 "adrfam": "ipv4", 00:22:51.835 "trsvcid": "$NVMF_PORT", 00:22:51.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.836 "hdgst": ${hdgst:-false}, 00:22:51.836 "ddgst": ${ddgst:-false} 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 } 00:22:51.836 EOF 00:22:51.836 )") 00:22:51.836 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.836 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.836 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.836 { 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme$subsystem", 00:22:51.836 "trtype": "$TEST_TRANSPORT", 00:22:51.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "$NVMF_PORT", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:22:51.836 "hdgst": ${hdgst:-false}, 00:22:51.836 "ddgst": ${ddgst:-false} 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 } 00:22:51.836 EOF 00:22:51.836 )") 00:22:51.836 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:51.836 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:51.836 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:51.836 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme1", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.836 "hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 },{ 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme2", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.836 "hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 },{ 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme3", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.836 "hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 },{ 00:22:51.836 "params": { 00:22:51.836 
"name": "Nvme4", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.836 "hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 },{ 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme5", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.836 "hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 },{ 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme6", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.836 "hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 },{ 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme7", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:51.836 "hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 },{ 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme8", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.836 
"hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 },{ 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme9", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:51.836 "hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 },{ 00:22:51.836 "params": { 00:22:51.836 "name": "Nvme10", 00:22:51.836 "trtype": "tcp", 00:22:51.836 "traddr": "10.0.0.2", 00:22:51.836 "adrfam": "ipv4", 00:22:51.836 "trsvcid": "4420", 00:22:51.836 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.836 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.836 "hdgst": false, 00:22:51.836 "ddgst": false 00:22:51.836 }, 00:22:51.836 "method": "bdev_nvme_attach_controller" 00:22:51.836 }' 00:22:51.836 [2024-11-29 13:06:51.494892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.836 [2024-11-29 13:06:51.536218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.740 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.740 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:53.740 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:53.740 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.740 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:53.740 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.740 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2050318 00:22:53.740 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:53.740 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:54.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2050318 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2050184 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.676 { 00:22:54.676 "params": { 00:22:54.676 "name": "Nvme$subsystem", 00:22:54.676 "trtype": "$TEST_TRANSPORT", 00:22:54.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.676 "adrfam": "ipv4", 00:22:54.676 "trsvcid": "$NVMF_PORT", 00:22:54.676 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.676 "hdgst": ${hdgst:-false}, 00:22:54.676 "ddgst": ${ddgst:-false} 00:22:54.676 }, 00:22:54.676 "method": "bdev_nvme_attach_controller" 00:22:54.676 } 00:22:54.676 EOF 00:22:54.676 )") 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.676 { 00:22:54.676 "params": { 00:22:54.676 "name": "Nvme$subsystem", 00:22:54.676 "trtype": "$TEST_TRANSPORT", 00:22:54.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.676 "adrfam": "ipv4", 00:22:54.676 "trsvcid": "$NVMF_PORT", 00:22:54.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.676 "hdgst": ${hdgst:-false}, 00:22:54.676 "ddgst": ${ddgst:-false} 00:22:54.676 }, 00:22:54.676 "method": "bdev_nvme_attach_controller" 00:22:54.676 } 00:22:54.676 EOF 00:22:54.676 )") 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.676 { 00:22:54.676 "params": { 00:22:54.676 "name": "Nvme$subsystem", 00:22:54.676 "trtype": "$TEST_TRANSPORT", 00:22:54.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.676 "adrfam": "ipv4", 00:22:54.676 "trsvcid": "$NVMF_PORT", 00:22:54.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.676 "hdgst": 
${hdgst:-false}, 00:22:54.676 "ddgst": ${ddgst:-false} 00:22:54.676 }, 00:22:54.676 "method": "bdev_nvme_attach_controller" 00:22:54.676 } 00:22:54.676 EOF 00:22:54.676 )") 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.676 { 00:22:54.676 "params": { 00:22:54.676 "name": "Nvme$subsystem", 00:22:54.676 "trtype": "$TEST_TRANSPORT", 00:22:54.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.676 "adrfam": "ipv4", 00:22:54.676 "trsvcid": "$NVMF_PORT", 00:22:54.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.676 "hdgst": ${hdgst:-false}, 00:22:54.676 "ddgst": ${ddgst:-false} 00:22:54.676 }, 00:22:54.676 "method": "bdev_nvme_attach_controller" 00:22:54.676 } 00:22:54.676 EOF 00:22:54.676 )") 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.676 { 00:22:54.676 "params": { 00:22:54.676 "name": "Nvme$subsystem", 00:22:54.676 "trtype": "$TEST_TRANSPORT", 00:22:54.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.676 "adrfam": "ipv4", 00:22:54.676 "trsvcid": "$NVMF_PORT", 00:22:54.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.676 "hdgst": ${hdgst:-false}, 00:22:54.676 "ddgst": ${ddgst:-false} 00:22:54.676 }, 00:22:54.676 "method": "bdev_nvme_attach_controller" 
00:22:54.676 } 00:22:54.676 EOF 00:22:54.676 )") 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.676 { 00:22:54.676 "params": { 00:22:54.676 "name": "Nvme$subsystem", 00:22:54.676 "trtype": "$TEST_TRANSPORT", 00:22:54.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.676 "adrfam": "ipv4", 00:22:54.676 "trsvcid": "$NVMF_PORT", 00:22:54.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.676 "hdgst": ${hdgst:-false}, 00:22:54.676 "ddgst": ${ddgst:-false} 00:22:54.676 }, 00:22:54.676 "method": "bdev_nvme_attach_controller" 00:22:54.676 } 00:22:54.676 EOF 00:22:54.676 )") 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.676 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.676 { 00:22:54.676 "params": { 00:22:54.676 "name": "Nvme$subsystem", 00:22:54.676 "trtype": "$TEST_TRANSPORT", 00:22:54.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.676 "adrfam": "ipv4", 00:22:54.676 "trsvcid": "$NVMF_PORT", 00:22:54.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.676 "hdgst": ${hdgst:-false}, 00:22:54.676 "ddgst": ${ddgst:-false} 00:22:54.676 }, 00:22:54.676 "method": "bdev_nvme_attach_controller" 00:22:54.676 } 00:22:54.676 EOF 00:22:54.676 )") 00:22:54.677 [2024-11-29 13:06:54.370652] Starting SPDK v25.01-pre git sha1 
0b658ecad / DPDK 24.03.0 initialization... 00:22:54.677 [2024-11-29 13:06:54.370701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050810 ] 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.677 { 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme$subsystem", 00:22:54.677 "trtype": "$TEST_TRANSPORT", 00:22:54.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "$NVMF_PORT", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.677 "hdgst": ${hdgst:-false}, 00:22:54.677 "ddgst": ${ddgst:-false} 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 } 00:22:54.677 EOF 00:22:54.677 )") 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.677 { 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme$subsystem", 00:22:54.677 "trtype": "$TEST_TRANSPORT", 00:22:54.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "$NVMF_PORT", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.677 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:54.677 "hdgst": ${hdgst:-false}, 00:22:54.677 "ddgst": ${ddgst:-false} 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 } 00:22:54.677 EOF 00:22:54.677 )") 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:54.677 { 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme$subsystem", 00:22:54.677 "trtype": "$TEST_TRANSPORT", 00:22:54.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "$NVMF_PORT", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.677 "hdgst": ${hdgst:-false}, 00:22:54.677 "ddgst": ${ddgst:-false} 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 } 00:22:54.677 EOF 00:22:54.677 )") 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:54.677 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme1", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 },{ 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme2", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 },{ 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme3", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 },{ 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme4", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 },{ 00:22:54.677 "params": { 
00:22:54.677 "name": "Nvme5", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 },{ 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme6", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 },{ 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme7", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 },{ 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme8", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 },{ 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme9", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 },{ 00:22:54.677 "params": { 00:22:54.677 "name": "Nvme10", 00:22:54.677 "trtype": "tcp", 00:22:54.677 "traddr": "10.0.0.2", 00:22:54.677 "adrfam": "ipv4", 00:22:54.677 "trsvcid": "4420", 00:22:54.677 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:54.677 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:54.677 "hdgst": false, 00:22:54.677 "ddgst": false 00:22:54.677 }, 00:22:54.677 "method": "bdev_nvme_attach_controller" 00:22:54.677 }' 00:22:54.677 [2024-11-29 13:06:54.433818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.677 [2024-11-29 13:06:54.475069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.052 Running I/O for 1 seconds... 00:22:57.430 2205.00 IOPS, 137.81 MiB/s 00:22:57.430 Latency(us) 00:22:57.430 [2024-11-29T12:06:57.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.430 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.430 Verification LBA range: start 0x0 length 0x400 00:22:57.430 Nvme1n1 : 1.17 273.24 17.08 0.00 0.00 232176.06 18236.10 218833.25 00:22:57.430 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.430 Verification LBA range: start 0x0 length 0x400 00:22:57.430 Nvme2n1 : 1.10 240.44 15.03 0.00 0.00 253822.59 11055.64 220656.86 00:22:57.430 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.430 Verification LBA range: start 0x0 length 0x400 00:22:57.430 Nvme3n1 : 1.13 292.96 18.31 0.00 0.00 205769.31 9687.93 208803.39 00:22:57.430 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.430 Verification LBA range: start 0x0 length 0x400 00:22:57.430 Nvme4n1 : 1.16 280.85 17.55 0.00 0.00 216035.24 4786.98 221568.67 00:22:57.430 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:57.430 Verification LBA range: start 0x0 length 0x400 00:22:57.430 Nvme5n1 : 1.18 271.53 16.97 0.00 0.00 220838.33 18578.03 233422.14 00:22:57.430 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.430 Verification LBA range: start 0x0 length 0x400 00:22:57.430 Nvme6n1 : 1.16 274.74 17.17 0.00 0.00 215005.81 31685.23 203332.56 00:22:57.431 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.431 Verification LBA range: start 0x0 length 0x400 00:22:57.431 Nvme7n1 : 1.18 272.08 17.00 0.00 0.00 214062.66 12366.36 228863.11 00:22:57.431 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.431 Verification LBA range: start 0x0 length 0x400 00:22:57.431 Nvme8n1 : 1.17 274.09 17.13 0.00 0.00 209130.36 14075.99 232510.33 00:22:57.431 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.431 Verification LBA range: start 0x0 length 0x400 00:22:57.431 Nvme9n1 : 1.19 269.81 16.86 0.00 0.00 209735.32 16754.42 225215.89 00:22:57.431 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:57.431 Verification LBA range: start 0x0 length 0x400 00:22:57.431 Nvme10n1 : 1.18 270.52 16.91 0.00 0.00 205973.77 17096.35 238892.97 00:22:57.431 [2024-11-29T12:06:57.251Z] =================================================================================================================== 00:22:57.431 [2024-11-29T12:06:57.251Z] Total : 2720.26 170.02 0.00 0.00 217592.87 4786.98 238892.97 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.431 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.431 rmmod nvme_tcp 00:22:57.689 rmmod nvme_fabrics 00:22:57.689 rmmod nvme_keyring 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2050184 ']' 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2050184 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2050184 ']' 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2050184 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2050184 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2050184' 00:22:57.689 killing process with pid 2050184 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2050184 00:22:57.689 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2050184 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:57.948 13:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.948 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:00.481 00:23:00.481 real 0m14.870s 00:23:00.481 user 0m33.844s 00:23:00.481 sys 0m5.508s 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.481 ************************************ 00:23:00.481 END TEST nvmf_shutdown_tc1 00:23:00.481 ************************************ 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.481 ************************************ 
00:23:00.481 START TEST nvmf_shutdown_tc2 00:23:00.481 ************************************ 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.481 13:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.481 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:00.482 13:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.482 13:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.482 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.482 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:00.482 13:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.482 13:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:00.482 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.482 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:00.482 13:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:00.482 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.482 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.482 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.482 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:00.482 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:00.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:00.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:23:00.482 00:23:00.482 --- 10.0.0.2 ping statistics --- 00:23:00.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.482 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:23:00.482 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:23:00.483 00:23:00.483 --- 10.0.0.1 ping statistics --- 00:23:00.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.483 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:00.483 13:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2051830 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2051830 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2051830 ']' 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.483 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.483 [2024-11-29 13:07:00.176295] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:23:00.483 [2024-11-29 13:07:00.176341] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.483 [2024-11-29 13:07:00.243973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.483 [2024-11-29 13:07:00.289294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.483 [2024-11-29 13:07:00.289328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.483 [2024-11-29 13:07:00.289336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.483 [2024-11-29 13:07:00.289345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.483 [2024-11-29 13:07:00.289351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:00.483 [2024-11-29 13:07:00.290783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.483 [2024-11-29 13:07:00.290871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.483 [2024-11-29 13:07:00.290903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.483 [2024-11-29 13:07:00.290904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.742 [2024-11-29 13:07:00.428994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.742 13:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.742 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.742 Malloc1 00:23:00.742 [2024-11-29 13:07:00.540453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.000 Malloc2 00:23:01.000 Malloc3 00:23:01.000 Malloc4 00:23:01.000 Malloc5 00:23:01.000 Malloc6 00:23:01.000 Malloc7 00:23:01.259 Malloc8 00:23:01.259 Malloc9 
00:23:01.259 Malloc10 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2052101 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2052101 /var/tmp/bdevperf.sock 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2052101 ']' 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:01.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.259 { 00:23:01.259 "params": { 00:23:01.259 "name": "Nvme$subsystem", 00:23:01.259 "trtype": "$TEST_TRANSPORT", 00:23:01.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.259 "adrfam": "ipv4", 00:23:01.259 "trsvcid": "$NVMF_PORT", 00:23:01.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.259 "hdgst": ${hdgst:-false}, 00:23:01.259 "ddgst": ${ddgst:-false} 00:23:01.259 }, 00:23:01.259 "method": "bdev_nvme_attach_controller" 00:23:01.259 } 00:23:01.259 EOF 00:23:01.259 )") 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.259 { 00:23:01.259 "params": { 00:23:01.259 "name": "Nvme$subsystem", 00:23:01.259 "trtype": "$TEST_TRANSPORT", 00:23:01.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.259 
"adrfam": "ipv4", 00:23:01.259 "trsvcid": "$NVMF_PORT", 00:23:01.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.259 "hdgst": ${hdgst:-false}, 00:23:01.259 "ddgst": ${ddgst:-false} 00:23:01.259 }, 00:23:01.259 "method": "bdev_nvme_attach_controller" 00:23:01.259 } 00:23:01.259 EOF 00:23:01.259 )") 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.259 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.259 { 00:23:01.259 "params": { 00:23:01.259 "name": "Nvme$subsystem", 00:23:01.259 "trtype": "$TEST_TRANSPORT", 00:23:01.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "$NVMF_PORT", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.260 "hdgst": ${hdgst:-false}, 00:23:01.260 "ddgst": ${ddgst:-false} 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 } 00:23:01.260 EOF 00:23:01.260 )") 00:23:01.260 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.260 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.260 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.260 { 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme$subsystem", 00:23:01.260 "trtype": "$TEST_TRANSPORT", 00:23:01.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "$NVMF_PORT", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.260 "hdgst": ${hdgst:-false}, 00:23:01.260 "ddgst": ${ddgst:-false} 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 } 00:23:01.260 EOF 00:23:01.260 )") 00:23:01.260 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.260 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.260 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.260 { 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme$subsystem", 00:23:01.260 "trtype": "$TEST_TRANSPORT", 00:23:01.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "$NVMF_PORT", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.260 "hdgst": ${hdgst:-false}, 00:23:01.260 "ddgst": ${ddgst:-false} 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 } 00:23:01.260 EOF 00:23:01.260 )") 00:23:01.260 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.260 { 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme$subsystem", 00:23:01.260 "trtype": "$TEST_TRANSPORT", 00:23:01.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "$NVMF_PORT", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.260 "hdgst": ${hdgst:-false}, 00:23:01.260 "ddgst": 
${ddgst:-false} 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 } 00:23:01.260 EOF 00:23:01.260 )") 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.260 { 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme$subsystem", 00:23:01.260 "trtype": "$TEST_TRANSPORT", 00:23:01.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "$NVMF_PORT", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.260 "hdgst": ${hdgst:-false}, 00:23:01.260 "ddgst": ${ddgst:-false} 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 } 00:23:01.260 EOF 00:23:01.260 )") 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.260 [2024-11-29 13:07:01.016647] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:23:01.260 [2024-11-29 13:07:01.016693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052101 ] 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.260 { 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme$subsystem", 00:23:01.260 "trtype": "$TEST_TRANSPORT", 00:23:01.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "$NVMF_PORT", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.260 "hdgst": ${hdgst:-false}, 00:23:01.260 "ddgst": ${ddgst:-false} 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 } 00:23:01.260 EOF 00:23:01.260 )") 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.260 { 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme$subsystem", 00:23:01.260 "trtype": "$TEST_TRANSPORT", 00:23:01.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "$NVMF_PORT", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.260 "hdgst": ${hdgst:-false}, 00:23:01.260 "ddgst": ${ddgst:-false} 00:23:01.260 }, 00:23:01.260 "method": 
"bdev_nvme_attach_controller" 00:23:01.260 } 00:23:01.260 EOF 00:23:01.260 )") 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.260 { 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme$subsystem", 00:23:01.260 "trtype": "$TEST_TRANSPORT", 00:23:01.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "$NVMF_PORT", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.260 "hdgst": ${hdgst:-false}, 00:23:01.260 "ddgst": ${ddgst:-false} 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 } 00:23:01.260 EOF 00:23:01.260 )") 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:01.260 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme1", 00:23:01.260 "trtype": "tcp", 00:23:01.260 "traddr": "10.0.0.2", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "4420", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.260 "hdgst": false, 00:23:01.260 "ddgst": false 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 },{ 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme2", 00:23:01.260 "trtype": "tcp", 00:23:01.260 "traddr": "10.0.0.2", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "4420", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.260 "hdgst": false, 00:23:01.260 "ddgst": false 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 },{ 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme3", 00:23:01.260 "trtype": "tcp", 00:23:01.260 "traddr": "10.0.0.2", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "4420", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:01.260 "hdgst": false, 00:23:01.260 "ddgst": false 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 },{ 00:23:01.260 "params": { 00:23:01.260 "name": "Nvme4", 00:23:01.260 "trtype": "tcp", 00:23:01.260 "traddr": "10.0.0.2", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "4420", 00:23:01.260 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:01.260 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:01.260 "hdgst": false, 00:23:01.260 "ddgst": false 00:23:01.260 }, 00:23:01.260 "method": "bdev_nvme_attach_controller" 00:23:01.260 },{ 00:23:01.260 "params": { 
00:23:01.260 "name": "Nvme5", 00:23:01.260 "trtype": "tcp", 00:23:01.260 "traddr": "10.0.0.2", 00:23:01.260 "adrfam": "ipv4", 00:23:01.260 "trsvcid": "4420", 00:23:01.261 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:01.261 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:01.261 "hdgst": false, 00:23:01.261 "ddgst": false 00:23:01.261 }, 00:23:01.261 "method": "bdev_nvme_attach_controller" 00:23:01.261 },{ 00:23:01.261 "params": { 00:23:01.261 "name": "Nvme6", 00:23:01.261 "trtype": "tcp", 00:23:01.261 "traddr": "10.0.0.2", 00:23:01.261 "adrfam": "ipv4", 00:23:01.261 "trsvcid": "4420", 00:23:01.261 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:01.261 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:01.261 "hdgst": false, 00:23:01.261 "ddgst": false 00:23:01.261 }, 00:23:01.261 "method": "bdev_nvme_attach_controller" 00:23:01.261 },{ 00:23:01.261 "params": { 00:23:01.261 "name": "Nvme7", 00:23:01.261 "trtype": "tcp", 00:23:01.261 "traddr": "10.0.0.2", 00:23:01.261 "adrfam": "ipv4", 00:23:01.261 "trsvcid": "4420", 00:23:01.261 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:01.261 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:01.261 "hdgst": false, 00:23:01.261 "ddgst": false 00:23:01.261 }, 00:23:01.261 "method": "bdev_nvme_attach_controller" 00:23:01.261 },{ 00:23:01.261 "params": { 00:23:01.261 "name": "Nvme8", 00:23:01.261 "trtype": "tcp", 00:23:01.261 "traddr": "10.0.0.2", 00:23:01.261 "adrfam": "ipv4", 00:23:01.261 "trsvcid": "4420", 00:23:01.261 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:01.261 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:01.261 "hdgst": false, 00:23:01.261 "ddgst": false 00:23:01.261 }, 00:23:01.261 "method": "bdev_nvme_attach_controller" 00:23:01.261 },{ 00:23:01.261 "params": { 00:23:01.261 "name": "Nvme9", 00:23:01.261 "trtype": "tcp", 00:23:01.261 "traddr": "10.0.0.2", 00:23:01.261 "adrfam": "ipv4", 00:23:01.261 "trsvcid": "4420", 00:23:01.261 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:01.261 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:01.261 "hdgst": false, 00:23:01.261 "ddgst": false 00:23:01.261 }, 00:23:01.261 "method": "bdev_nvme_attach_controller" 00:23:01.261 },{ 00:23:01.261 "params": { 00:23:01.261 "name": "Nvme10", 00:23:01.261 "trtype": "tcp", 00:23:01.261 "traddr": "10.0.0.2", 00:23:01.261 "adrfam": "ipv4", 00:23:01.261 "trsvcid": "4420", 00:23:01.261 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:01.261 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:01.261 "hdgst": false, 00:23:01.261 "ddgst": false 00:23:01.261 }, 00:23:01.261 "method": "bdev_nvme_attach_controller" 00:23:01.261 }' 00:23:01.519 [2024-11-29 13:07:01.079687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.519 [2024-11-29 13:07:01.121038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.894 Running I/O for 10 seconds... 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:03.161 13:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.161 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.162 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.162 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.162 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.162 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:03.162 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:03.162 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:03.421 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:03.421 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.421 13:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.421 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.421 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.421 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2052101 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2052101 ']' 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2052101 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.680 13:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2052101 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2052101' 00:23:03.680 killing process with pid 2052101 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2052101 00:23:03.680 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2052101 00:23:03.680 Received shutdown signal, test time was about 0.825355 seconds 00:23:03.680 00:23:03.680 Latency(us) 00:23:03.680 [2024-11-29T12:07:03.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.680 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme1n1 : 0.82 310.42 19.40 0.00 0.00 203694.53 16526.47 220656.86 00:23:03.680 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme2n1 : 0.81 242.90 15.18 0.00 0.00 254132.29 1218.11 224304.08 00:23:03.680 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme3n1 : 0.82 312.82 19.55 0.00 0.00 194089.18 15614.66 221568.67 00:23:03.680 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme4n1 : 0.81 322.81 20.18 0.00 0.00 182747.56 
6354.14 208803.39 00:23:03.680 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme5n1 : 0.81 237.10 14.82 0.00 0.00 245490.79 19261.89 238892.97 00:23:03.680 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme6n1 : 0.82 311.71 19.48 0.00 0.00 182834.75 16640.45 207891.59 00:23:03.680 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme7n1 : 0.79 242.61 15.16 0.00 0.00 228539.29 17666.23 213362.42 00:23:03.680 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme8n1 : 0.80 240.69 15.04 0.00 0.00 225231.32 15728.64 219745.06 00:23:03.680 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme9n1 : 0.79 243.43 15.21 0.00 0.00 217307.94 19717.79 224304.08 00:23:03.680 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.680 Verification LBA range: start 0x0 length 0x400 00:23:03.680 Nvme10n1 : 0.81 235.94 14.75 0.00 0.00 220333.34 19489.84 251658.24 00:23:03.680 [2024-11-29T12:07:03.501Z] =================================================================================================================== 00:23:03.681 [2024-11-29T12:07:03.501Z] Total : 2700.42 168.78 0.00 0.00 212540.45 1218.11 251658.24 00:23:03.940 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2051830 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.874 rmmod nvme_tcp 00:23:04.874 rmmod nvme_fabrics 00:23:04.874 rmmod nvme_keyring 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2051830 ']' 
00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2051830 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2051830 ']' 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2051830 00:23:04.874 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:04.875 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.875 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2051830 00:23:04.875 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:04.875 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:04.875 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2051830' 00:23:04.875 killing process with pid 2051830 00:23:04.875 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2051830 00:23:04.875 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2051830 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.442 13:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.442 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.345 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.345 00:23:07.346 real 0m7.259s 00:23:07.346 user 0m21.394s 00:23:07.346 sys 0m1.288s 00:23:07.346 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.346 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.346 ************************************ 00:23:07.346 END TEST nvmf_shutdown_tc2 00:23:07.346 ************************************ 00:23:07.346 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:07.346 13:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:07.346 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.346 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:07.605 ************************************ 00:23:07.605 START TEST nvmf_shutdown_tc3 00:23:07.605 ************************************ 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.605 13:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:07.605 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.605 13:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:07.605 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.605 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:07.605 Found net devices under 0000:86:00.0: cvl_0_0 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:07.606 Found net devices under 0000:86:00.1: cvl_0_1 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.606 
13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.606 13:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:23:07.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:23:07.606 00:23:07.606 --- 10.0.0.2 ping statistics --- 00:23:07.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.606 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:23:07.606 00:23:07.606 --- 10.0.0.1 ping statistics --- 00:23:07.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.606 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.606 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2053280 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2053280 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2053280 ']' 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.865 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:07.865 [2024-11-29 13:07:07.510057] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:23:07.865 [2024-11-29 13:07:07.510101] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.865 [2024-11-29 13:07:07.576762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.865 [2024-11-29 13:07:07.617982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.865 [2024-11-29 13:07:07.618023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.865 [2024-11-29 13:07:07.618030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.865 [2024-11-29 13:07:07.618035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.865 [2024-11-29 13:07:07.618040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:07.865 [2024-11-29 13:07:07.619701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.865 [2024-11-29 13:07:07.619801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.865 [2024-11-29 13:07:07.619899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.865 [2024-11-29 13:07:07.619900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.125 [2024-11-29 13:07:07.765682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.125 13:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.125 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.126 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.126 Malloc1 00:23:08.126 [2024-11-29 13:07:07.873687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.126 Malloc2 00:23:08.126 Malloc3 00:23:08.385 Malloc4 00:23:08.385 Malloc5 00:23:08.385 Malloc6 00:23:08.385 Malloc7 00:23:08.385 Malloc8 00:23:08.385 Malloc9 
00:23:08.645 Malloc10 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2053421 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2053421 /var/tmp/bdevperf.sock 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2053421 ']' 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:08.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.645 { 00:23:08.645 "params": { 00:23:08.645 "name": "Nvme$subsystem", 00:23:08.645 "trtype": "$TEST_TRANSPORT", 00:23:08.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.645 "adrfam": "ipv4", 00:23:08.645 "trsvcid": "$NVMF_PORT", 00:23:08.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.645 "hdgst": ${hdgst:-false}, 00:23:08.645 "ddgst": ${ddgst:-false} 00:23:08.645 }, 00:23:08.645 "method": "bdev_nvme_attach_controller" 00:23:08.645 } 00:23:08.645 EOF 00:23:08.645 )") 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:08.645 { 00:23:08.645 "params": { 00:23:08.645 "name": "Nvme$subsystem", 00:23:08.645 "trtype": "$TEST_TRANSPORT", 00:23:08.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:08.645 
"adrfam": "ipv4", 00:23:08.645 "trsvcid": "$NVMF_PORT", 00:23:08.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:08.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:08.645 "hdgst": ${hdgst:-false}, 00:23:08.645 "ddgst": ${ddgst:-false} 00:23:08.645 }, 00:23:08.645 "method": "bdev_nvme_attach_controller" 00:23:08.645 } 00:23:08.645 EOF 00:23:08.645 )") 00:23:08.645 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:08.646 [2024-11-29 13:07:08.344706] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:23:08.646 [2024-11-29 13:07:08.344754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053421 ] 00:23:08.646 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
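The xtrace above shows nvmf/common.sh assembling one JSON fragment per subsystem in a loop (a heredoc appended to a `config` array) and then joining the fragments with `IFS=,` for the final payload. A minimal, self-contained sketch of that pattern — the transport, address, and port values below are stand-ins, not the values from this run:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config-assembly loop xtraced above.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT values are placeholders.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
# Each iteration appends one unexpanded-at-trace-time heredoc fragment.
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the IFS=, / printf '%s\n' step does above.
joined=$(IFS=,; printf '%s\n' "${config[*]}")
printf '%s\n' "$joined"
```

Setting `IFS` inside the `$( … )` subshell keeps the field-separator change from leaking into the rest of the script, which is why the trace shows `IFS=,` immediately before the `printf '%s\n'` join.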
00:23:08.646 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:08.646 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:08.646 "params": { 00:23:08.646 "name": "Nvme1", 00:23:08.646 "trtype": "tcp", 00:23:08.646 "traddr": "10.0.0.2", 00:23:08.646 "adrfam": "ipv4", 00:23:08.646 "trsvcid": "4420", 00:23:08.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.646 "hdgst": false, 00:23:08.646 "ddgst": false 00:23:08.646 }, 00:23:08.646 "method": "bdev_nvme_attach_controller" 00:23:08.646 },{ 00:23:08.646 "params": { 00:23:08.646 "name": "Nvme2", 00:23:08.646 "trtype": "tcp", 00:23:08.646 "traddr": "10.0.0.2", 00:23:08.646 "adrfam": "ipv4", 00:23:08.646 "trsvcid": "4420", 00:23:08.646 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:08.646 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:08.646 "hdgst": false, 00:23:08.646 "ddgst": false 00:23:08.646 }, 00:23:08.646 "method": "bdev_nvme_attach_controller" 00:23:08.646 },{ 00:23:08.646 "params": { 00:23:08.646 "name": "Nvme3", 00:23:08.646 "trtype": "tcp", 00:23:08.646 "traddr": "10.0.0.2", 00:23:08.646 "adrfam": "ipv4", 00:23:08.646 "trsvcid": "4420", 00:23:08.646 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:08.646 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:08.646 "hdgst": false, 00:23:08.646 "ddgst": false 00:23:08.646 }, 00:23:08.646 "method": "bdev_nvme_attach_controller" 00:23:08.646 },{ 00:23:08.646 "params": { 00:23:08.646 "name": "Nvme4", 00:23:08.646 "trtype": "tcp", 00:23:08.646 "traddr": "10.0.0.2", 00:23:08.646 "adrfam": "ipv4", 00:23:08.646 "trsvcid": "4420", 00:23:08.646 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:08.646 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:08.646 "hdgst": false, 00:23:08.646 "ddgst": false 00:23:08.646 }, 00:23:08.646 "method": "bdev_nvme_attach_controller" 00:23:08.646 },{ 00:23:08.646 "params": { 
00:23:08.646 "name": "Nvme5", 00:23:08.646 "trtype": "tcp", 00:23:08.646 "traddr": "10.0.0.2", 00:23:08.646 "adrfam": "ipv4", 00:23:08.646 "trsvcid": "4420", 00:23:08.646 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:08.646 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:08.646 "hdgst": false, 00:23:08.646 "ddgst": false 00:23:08.646 }, 00:23:08.646 "method": "bdev_nvme_attach_controller" 00:23:08.646 },{ 00:23:08.646 "params": { 00:23:08.646 "name": "Nvme6", 00:23:08.646 "trtype": "tcp", 00:23:08.646 "traddr": "10.0.0.2", 00:23:08.646 "adrfam": "ipv4", 00:23:08.646 "trsvcid": "4420", 00:23:08.646 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:08.646 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:08.646 "hdgst": false, 00:23:08.646 "ddgst": false 00:23:08.646 }, 00:23:08.646 "method": "bdev_nvme_attach_controller" 00:23:08.646 },{ 00:23:08.646 "params": { 00:23:08.646 "name": "Nvme7", 00:23:08.646 "trtype": "tcp", 00:23:08.646 "traddr": "10.0.0.2", 00:23:08.646 "adrfam": "ipv4", 00:23:08.646 "trsvcid": "4420", 00:23:08.646 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:08.646 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:08.646 "hdgst": false, 00:23:08.646 "ddgst": false 00:23:08.646 }, 00:23:08.646 "method": "bdev_nvme_attach_controller" 00:23:08.646 },{ 00:23:08.646 "params": { 00:23:08.646 "name": "Nvme8", 00:23:08.646 "trtype": "tcp", 00:23:08.646 "traddr": "10.0.0.2", 00:23:08.646 "adrfam": "ipv4", 00:23:08.646 "trsvcid": "4420", 00:23:08.646 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:08.646 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:08.646 "hdgst": false, 00:23:08.647 "ddgst": false 00:23:08.647 }, 00:23:08.647 "method": "bdev_nvme_attach_controller" 00:23:08.647 },{ 00:23:08.647 "params": { 00:23:08.647 "name": "Nvme9", 00:23:08.647 "trtype": "tcp", 00:23:08.647 "traddr": "10.0.0.2", 00:23:08.647 "adrfam": "ipv4", 00:23:08.647 "trsvcid": "4420", 00:23:08.647 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:08.647 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:08.647 "hdgst": false, 00:23:08.647 "ddgst": false 00:23:08.647 }, 00:23:08.647 "method": "bdev_nvme_attach_controller" 00:23:08.647 },{ 00:23:08.647 "params": { 00:23:08.647 "name": "Nvme10", 00:23:08.647 "trtype": "tcp", 00:23:08.647 "traddr": "10.0.0.2", 00:23:08.647 "adrfam": "ipv4", 00:23:08.647 "trsvcid": "4420", 00:23:08.647 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:08.647 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:08.647 "hdgst": false, 00:23:08.647 "ddgst": false 00:23:08.647 }, 00:23:08.647 "method": "bdev_nvme_attach_controller" 00:23:08.647 }' 00:23:08.647 [2024-11-29 13:07:08.409804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.647 [2024-11-29 13:07:08.450900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.551 Running I/O for 10 seconds... 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:10.552 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:10.811 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.811 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=25 00:23:10.811 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 25 -ge 100 ']' 00:23:10.811 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2053280 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2053280 ']' 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2053280 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:11.089 13:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2053280 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2053280' 00:23:11.089 killing process with pid 2053280 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2053280 00:23:11.089 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2053280 00:23:11.089 [2024-11-29 13:07:10.738377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2384850 is same with the state(6) to be set 00:23:11.089 [2024-11-29 13:07:10.739952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387400 is same with the state(6) to be set 00:23:11.090 [2024-11-29 13:07:10.741455]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2384d20 is same with the state(6) to be set 00:23:11.091 [... same message for tqpair=0x2384d20 repeated through 13:07:10.741862, elided ...] 00:23:11.092 [2024-11-29 13:07:10.743021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23851f0 is same with the state(6) to be set 00:23:11.092 [... same message for tqpair=0x23851f0 repeated through 13:07:10.743443, elided ...] 00:23:11.092 [2024-11-29 13:07:10.743515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.092 [2024-11-29 13:07:10.743545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.092 [2024-11-29 13:07:10.743555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.092 [2024-11-29 13:07:10.743562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.092 [2024-11-29 13:07:10.743570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.092 [2024-11-29 13:07:10.743577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.092 
[2024-11-29 13:07:10.743584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.092 [2024-11-29 13:07:10.743591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.092 [2024-11-29 13:07:10.743598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdad30 is same with the state(6) to be set 00:23:11.092 [... analogous ASYNC EVENT REQUEST / ABORTED - SQ DELETION sequences for tqpair=0x1153100, 0xccf200, and 0xcdb1c0, elided ...] 00:23:11.093 [2024-11-29 13:07:10.745289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [... same message for tqpair=0x23856e0 repeated through 13:07:10.745561, elided ...] 00:23:11.093 [2024-11-29 13:07:10.745567] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.093 [2024-11-29 13:07:10.745639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.745646] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.745652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.745658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.745664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.745669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.745675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.745681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.745687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23856e0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746410] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746486] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746559] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746634] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746709] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.094 [2024-11-29 13:07:10.746727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.746733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.746740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.746746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.746752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.746758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.746764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.746771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385bb0 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.748916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386a40 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.748937] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386a40 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.748945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386a40 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.748963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386a40 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749581] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749657] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749734] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749812] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749886] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.749915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2386f10 is same with the state(6) to be set 00:23:11.095 [2024-11-29 13:07:10.763400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.095 [2024-11-29 13:07:10.763437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.095 [2024-11-29 13:07:10.763453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.095 [2024-11-29 13:07:10.763462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.095 [2024-11-29 13:07:10.763470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.095 [2024-11-29 13:07:10.763478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:11.095 [2024-11-29 13:07:10.763486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763575] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 
[2024-11-29 13:07:10.763830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.763991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.763998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.764006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.764012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.096 [2024-11-29 13:07:10.764020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.096 [2024-11-29 13:07:10.764027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 
[2024-11-29 13:07:10.764169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.097 [2024-11-29 13:07:10.764396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:11.097 [2024-11-29 13:07:10.764653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdad30 (9): Bad file 
descriptor 00:23:11.097 [2024-11-29 13:07:10.764696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113b820 is same with the state(6) to be set 00:23:11.097 [2024-11-29 13:07:10.764773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1153100 (9): Bad file descriptor 00:23:11.097 [2024-11-29 13:07:10.764804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764813] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd8c70 is same with the state(6) to be set 00:23:11.097 [2024-11-29 13:07:10.764889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 
[2024-11-29 13:07:10.764918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.097 [2024-11-29 13:07:10.764933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.097 [2024-11-29 13:07:10.764939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.764946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef610 is same with the state(6) to be set 00:23:11.098 [2024-11-29 13:07:10.764978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.764986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.764993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1106240 is same with the state(6) to be set 00:23:11.098 [2024-11-29 13:07:10.765062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1106cf0 is same with the state(6) to be set 00:23:11.098 [2024-11-29 
13:07:10.765133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccf200 (9): Bad file descriptor 00:23:11.098 [2024-11-29 13:07:10.765158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.098 [2024-11-29 13:07:10.765209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1154350 is same with the state(6) to be set 00:23:11.098 [2024-11-29 13:07:10.765231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb1c0 (9): Bad file descriptor 00:23:11.098 [2024-11-29 13:07:10.765615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.098 [2024-11-29 13:07:10.765636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.098 [2024-11-29 13:07:10.765661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.098 [2024-11-29 13:07:10.765677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.098 [2024-11-29 13:07:10.765695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.098 [2024-11-29 13:07:10.765711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 [2024-11-29 13:07:10.765721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.098 [2024-11-29 13:07:10.765728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.098 
[2024-11-29 13:07:10.765737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:11.098 [2024-11-29 13:07:10.765744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.098 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: WRITE sqid:1 cid:52-63 (lba 23040-24448), READ sqid:1 cid:0-44 (lba 16384-22016), then WRITE sqid:1 cid:0-59 (lba 16384-23936); every command is len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, and every completion is ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 
00:23:11.101 [2024-11-29 13:07:10.773305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.101 [2024-11-29 13:07:10.773313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.101 [2024-11-29 13:07:10.773322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.101 [2024-11-29 13:07:10.773330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.101 [2024-11-29 13:07:10.773342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.773348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.773357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.773364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.774429] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.102 [2024-11-29 13:07:10.774522] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.102 [2024-11-29 13:07:10.774570] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.102 [2024-11-29 13:07:10.774616] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.102 [2024-11-29 13:07:10.776606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:11.102 [2024-11-29 13:07:10.776637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 
1] resetting controller 00:23:11.102 [2024-11-29 13:07:10.776657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd8c70 (9): Bad file descriptor 00:23:11.102 [2024-11-29 13:07:10.776670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113b820 (9): Bad file descriptor 00:23:11.102 [2024-11-29 13:07:10.776699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbef610 (9): Bad file descriptor 00:23:11.102 [2024-11-29 13:07:10.776712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106240 (9): Bad file descriptor 00:23:11.102 [2024-11-29 13:07:10.776729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106cf0 (9): Bad file descriptor 00:23:11.102 [2024-11-29 13:07:10.776751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154350 (9): Bad file descriptor 00:23:11.102 [2024-11-29 13:07:10.777224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:11.102 [2024-11-29 13:07:10.777307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777452] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 
13:07:10.777723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.102 [2024-11-29 13:07:10.777815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.102 [2024-11-29 13:07:10.777825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.777986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.777995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.103 [2024-11-29 13:07:10.778004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778091] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.103 [2024-11-29 13:07:10.778347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.103 [2024-11-29 13:07:10.778355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xedf220 is same with the state(6) to be set 00:23:11.103 [2024-11-29 13:07:10.779373] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.104 [2024-11-29 13:07:10.779386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.104 [2024-11-29 13:07:10.779398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.104 [2024-11-29 13:07:10.779406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.104 [2024-11-29 13:07:10.779414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.104 [2024-11-29 13:07:10.779421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.104 [2024-11-29 13:07:10.779430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.104 [2024-11-29 13:07:10.779439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.104 [2024-11-29 13:07:10.779449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.104 [2024-11-29 13:07:10.779457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.104 [2024-11-29 13:07:10.779467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.104 [2024-11-29 13:07:10.779474] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.104 [2024-11-29 13:07:10.779483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.104 [2024-11-29 13:07:10.779490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ/completion pair above repeats verbatim for cid:2 through cid:58 (lba 16640 through 23808, step 128); every command is aborted with SQ DELETION (00/08) ...]
00:23:11.105 [2024-11-29 13:07:10.780413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee0250 is same with the state(6) to be set
00:23:11.105 [2024-11-29 13:07:10.781421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.105 [2024-11-29 13:07:10.781433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ abort sequence repeats for cid:5 through cid:52 (lba 17024 through 23040, step 128) ...]
00:23:11.107 [2024-11-29 13:07:10.782215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE aborts repeat for cid:1 through cid:3 (lba 24704 through 24960, step 128) ...]
00:23:11.107 [2024-11-29 13:07:10.782280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.782443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.782451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d3bd0 is same with the state(6) to be set 00:23:11.107 [2024-11-29 13:07:10.783464] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.107 [2024-11-29 13:07:10.784045] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.107 [2024-11-29 13:07:10.784099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.107 [2024-11-29 13:07:10.784344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.107 [2024-11-29 13:07:10.784353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:11.108 [2024-11-29 13:07:10.784392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784756] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784842] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.108 [2024-11-29 13:07:10.784899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.108 [2024-11-29 13:07:10.784908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.784917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.784923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.784932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.784939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.784956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.784963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.784972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.784978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.784988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.784995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.785004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.785011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.785019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.785026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 
13:07:10.785035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.785043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.785052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.785058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.785066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.785073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.785082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.785090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.785098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.785105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.785115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.785121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.785131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.785139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.785147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf89550 is same with the state(6) to be set 00:23:11.109 [2024-11-29 13:07:10.786122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:11.109 [2024-11-29 13:07:10.786138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:11.109 [2024-11-29 13:07:10.786150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:11.109 [2024-11-29 13:07:10.786162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:11.109 [2024-11-29 13:07:10.786473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.109 [2024-11-29 13:07:10.786488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x113b820 with addr=10.0.0.2, port=4420 00:23:11.109 [2024-11-29 13:07:10.786497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113b820 is same with the state(6) to be set 00:23:11.109 [2024-11-29 13:07:10.786651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.109 [2024-11-29 13:07:10.786663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd8c70 with addr=10.0.0.2, port=4420 00:23:11.109 [2024-11-29 13:07:10.786671] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd8c70 is same with the state(6) to be set 00:23:11.109 [2024-11-29 13:07:10.786774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.109 [2024-11-29 13:07:10.786785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbef610 with addr=10.0.0.2, port=4420 00:23:11.109 [2024-11-29 13:07:10.786793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef610 is same with the state(6) to be set 00:23:11.109 [2024-11-29 13:07:10.786853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.786863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.786875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.786884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.786894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.786901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.786910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.786917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 
[2024-11-29 13:07:10.786926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.786938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.786953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.786960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.786969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.786976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.786986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.786993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.109 [2024-11-29 13:07:10.787172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.109 [2024-11-29 13:07:10.787182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:11.110 [2024-11-29 13:07:10.787212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787299] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 
13:07:10.787575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787665] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.110 [2024-11-29 13:07:10.787800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.110 [2024-11-29 13:07:10.787809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.787815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.787824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.787830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.787839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 
[2024-11-29 13:07:10.787846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.787856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.787863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.787871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.787878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.787886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.787894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.787903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e0c30 is same with the state(6) to be set 00:23:11.111 [2024-11-29 13:07:10.788023] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:11.111 [2024-11-29 13:07:10.788259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.111 [2024-11-29 13:07:10.788272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdb1c0 with addr=10.0.0.2, port=4420 00:23:11.111 [2024-11-29 13:07:10.788280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdb1c0 is same with the state(6) to be set 00:23:11.111 [2024-11-29 13:07:10.788370] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.111 [2024-11-29 13:07:10.788381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xccf200 with addr=10.0.0.2, port=4420 00:23:11.111 [2024-11-29 13:07:10.788389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccf200 is same with the state(6) to be set 00:23:11.111 [2024-11-29 13:07:10.788475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.111 [2024-11-29 13:07:10.788485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdad30 with addr=10.0.0.2, port=4420 00:23:11.111 [2024-11-29 13:07:10.788492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdad30 is same with the state(6) to be set 00:23:11.111 [2024-11-29 13:07:10.788593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.111 [2024-11-29 13:07:10.788604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1153100 with addr=10.0.0.2, port=4420 00:23:11.111 [2024-11-29 13:07:10.788612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1153100 is same with the state(6) to be set 00:23:11.111 [2024-11-29 13:07:10.788622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113b820 (9): Bad file descriptor 00:23:11.111 [2024-11-29 13:07:10.788632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd8c70 (9): Bad file descriptor 00:23:11.111 [2024-11-29 13:07:10.788642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbef610 (9): Bad file descriptor 00:23:11.111 [2024-11-29 13:07:10.788667] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:23:11.111 [2024-11-29 13:07:10.788688] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:23:11.111 [2024-11-29 13:07:10.788699] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:23:11.111 [2024-11-29 13:07:10.788709] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:11.111 [2024-11-29 13:07:10.790572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:11.111 [2024-11-29 13:07:10.790609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb1c0 (9): Bad file descriptor 00:23:11.111 [2024-11-29 13:07:10.790621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccf200 (9): Bad file descriptor 00:23:11.111 [2024-11-29 13:07:10.790630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdad30 (9): Bad file descriptor 00:23:11.111 [2024-11-29 13:07:10.790639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1153100 (9): Bad file descriptor 00:23:11.111 [2024-11-29 13:07:10.790647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:11.111 [2024-11-29 13:07:10.790655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:11.111 [2024-11-29 13:07:10.790665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:11.111 [2024-11-29 13:07:10.790675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:23:11.111 [2024-11-29 13:07:10.790683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:11.111 [2024-11-29 13:07:10.790689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:11.111 [2024-11-29 13:07:10.790695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:11.111 [2024-11-29 13:07:10.790701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:11.111 [2024-11-29 13:07:10.790709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:11.111 [2024-11-29 13:07:10.790716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:11.111 [2024-11-29 13:07:10.790726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:11.111 [2024-11-29 13:07:10.790733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:23:11.111 [2024-11-29 13:07:10.790820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.790985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.790995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.111 [2024-11-29 13:07:10.791002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.111 [2024-11-29 13:07:10.791010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:11.111 [2024-11-29 13:07:10.791021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:11.111-00:23:11.113 [2024-11-29 13:07:10.791030-791860] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:12-63 nsid:1 lba:17920-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [52 repeated READ/ABORTED pairs condensed]
00:23:11.113 [2024-11-29 13:07:10.791868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e1ef0 is same with the state(6) to be set
00:23:11.113-00:23:11.114 [2024-11-29 13:07:10.792874-793769] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-53 nsid:1 lba:8192-14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [54 repeated READ/ABORTED pairs condensed]
00:23:11.114 [2024-11-29 13:07:10.793777] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.114 [2024-11-29 13:07:10.793785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.114 [2024-11-29 13:07:10.793795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.114 [2024-11-29 13:07:10.793801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.114 [2024-11-29 13:07:10.793810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.114 [2024-11-29 13:07:10.793818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.114 [2024-11-29 13:07:10.793827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.114 [2024-11-29 13:07:10.793834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.114 [2024-11-29 13:07:10.793842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.114 [2024-11-29 13:07:10.793850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.114 [2024-11-29 13:07:10.793859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.114 [2024-11-29 13:07:10.793866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.114 [2024-11-29 13:07:10.793876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.114 [2024-11-29 13:07:10.793884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.114 [2024-11-29 13:07:10.793892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.114 [2024-11-29 13:07:10.793899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.114 [2024-11-29 13:07:10.793908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.114 [2024-11-29 13:07:10.793916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.115 [2024-11-29 13:07:10.793924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.115 [2024-11-29 13:07:10.793931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.115 [2024-11-29 13:07:10.793939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf88330 is same with the state(6) to be set 00:23:11.115 [2024-11-29 13:07:10.794928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:11.115 task offset: 17024 on job bdev=Nvme8n1 fails 00:23:11.115 00:23:11.115 Latency(us) 00:23:11.115 [2024-11-29T12:07:10.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.115 Job: Nvme1n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme1n1 ended in about 0.65 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme1n1 : 0.65 198.43 12.40 99.21 0.00 212040.13 15956.59 222480.47 00:23:11.115 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme2n1 ended in about 0.65 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme2n1 : 0.65 197.80 12.36 98.90 0.00 207378.70 29177.77 187831.87 00:23:11.115 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme3n1 ended in about 0.65 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme3n1 : 0.65 203.34 12.71 98.59 0.00 198579.31 16868.40 186920.07 00:23:11.115 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme4n1 ended in about 0.66 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme4n1 : 0.66 195.10 12.19 97.55 0.00 199739.96 14588.88 206067.98 00:23:11.115 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme5n1 ended in about 0.66 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme5n1 : 0.66 194.37 12.15 97.18 0.00 195243.85 16982.37 249834.63 00:23:11.115 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme6n1 ended in about 0.64 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme6n1 : 0.64 199.55 12.47 99.78 0.00 184197.71 16868.40 216097.84 00:23:11.115 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme7n1 ended in about 0.64 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme7n1 : 0.64 199.28 12.46 99.64 0.00 179189.46 
12765.27 223392.28 00:23:11.115 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme8n1 ended in about 0.64 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme8n1 : 0.64 199.96 12.50 99.98 0.00 173241.36 10257.81 206979.78 00:23:11.115 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme9n1 ended in about 0.66 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme9n1 : 0.66 96.88 6.05 96.88 0.00 262408.46 33052.94 221568.67 00:23:11.115 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:11.115 Job: Nvme10n1 ended in about 0.65 seconds with error 00:23:11.115 Verification LBA range: start 0x0 length 0x400 00:23:11.115 Nvme10n1 : 0.65 98.18 6.14 98.18 0.00 250248.68 17552.25 238892.97 00:23:11.115 [2024-11-29T12:07:10.935Z] =================================================================================================================== 00:23:11.115 [2024-11-29T12:07:10.935Z] Total : 1782.90 111.43 985.90 0.00 202639.00 10257.81 249834.63 00:23:11.115 [2024-11-29 13:07:10.824181] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:11.115 [2024-11-29 13:07:10.824231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:11.115 [2024-11-29 13:07:10.824499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.115 [2024-11-29 13:07:10.824516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1106cf0 with addr=10.0.0.2, port=4420 00:23:11.115 [2024-11-29 13:07:10.824528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1106cf0 is same with the state(6) to be set 00:23:11.115 [2024-11-29 13:07:10.824538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in 
error state 00:23:11.115 [2024-11-29 13:07:10.824546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:11.115 [2024-11-29 13:07:10.824555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:11.115 [2024-11-29 13:07:10.824564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:11.115 [2024-11-29 13:07:10.824575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:11.115 [2024-11-29 13:07:10.824582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:11.115 [2024-11-29 13:07:10.824588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:11.115 [2024-11-29 13:07:10.824595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:11.115 [2024-11-29 13:07:10.824602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:11.115 [2024-11-29 13:07:10.824609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:11.115 [2024-11-29 13:07:10.824617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:11.115 [2024-11-29 13:07:10.824623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:23:11.115 [2024-11-29 13:07:10.824629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:11.115 [2024-11-29 13:07:10.824636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:11.115 [2024-11-29 13:07:10.824644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:11.115 [2024-11-29 13:07:10.824650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:11.115 [2024-11-29 13:07:10.825117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.115 [2024-11-29 13:07:10.825133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1106240 with addr=10.0.0.2, port=4420 00:23:11.115 [2024-11-29 13:07:10.825148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1106240 is same with the state(6) to be set 00:23:11.115 [2024-11-29 13:07:10.825353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.115 [2024-11-29 13:07:10.825364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1154350 with addr=10.0.0.2, port=4420 00:23:11.115 [2024-11-29 13:07:10.825371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1154350 is same with the state(6) to be set 00:23:11.115 [2024-11-29 13:07:10.825385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106cf0 (9): Bad file descriptor 00:23:11.115 [2024-11-29 13:07:10.825451] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:23:11.115 [2024-11-29 13:07:10.825976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106240 (9): Bad file descriptor 00:23:11.115 [2024-11-29 13:07:10.825990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154350 (9): Bad file descriptor 00:23:11.115 [2024-11-29 13:07:10.825999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:11.115 [2024-11-29 13:07:10.826006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:11.115 [2024-11-29 13:07:10.826013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:11.115 [2024-11-29 13:07:10.826020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:11.115 [2024-11-29 13:07:10.826069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:11.115 [2024-11-29 13:07:10.826082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:11.115 [2024-11-29 13:07:10.826090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:11.115 [2024-11-29 13:07:10.826099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:11.115 [2024-11-29 13:07:10.826108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:11.115 [2024-11-29 13:07:10.826116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:11.115 [2024-11-29 13:07:10.826125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:11.115 [2024-11-29 
13:07:10.826170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:11.115 [2024-11-29 13:07:10.826178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:11.115 [2024-11-29 13:07:10.826184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:11.115 [2024-11-29 13:07:10.826191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:11.115 [2024-11-29 13:07:10.826199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:11.115 [2024-11-29 13:07:10.826205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:11.115 [2024-11-29 13:07:10.826211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:11.115 [2024-11-29 13:07:10.826218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:23:11.115 [2024-11-29 13:07:10.826412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.115 [2024-11-29 13:07:10.826426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1153100 with addr=10.0.0.2, port=4420 00:23:11.115 [2024-11-29 13:07:10.826434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1153100 is same with the state(6) to be set 00:23:11.115 [2024-11-29 13:07:10.826670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.115 [2024-11-29 13:07:10.826681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdad30 with addr=10.0.0.2, port=4420 00:23:11.116 [2024-11-29 13:07:10.826689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdad30 is same with the state(6) to be set 00:23:11.116 [2024-11-29 13:07:10.826762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.116 [2024-11-29 13:07:10.826771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xccf200 with addr=10.0.0.2, port=4420 00:23:11.116 [2024-11-29 13:07:10.826778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccf200 is same with the state(6) to be set 00:23:11.116 [2024-11-29 13:07:10.826921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.116 [2024-11-29 13:07:10.826931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcdb1c0 with addr=10.0.0.2, port=4420 00:23:11.116 [2024-11-29 13:07:10.826938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdb1c0 is same with the state(6) to be set 00:23:11.116 [2024-11-29 13:07:10.827093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.116 [2024-11-29 13:07:10.827105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0xbef610 with addr=10.0.0.2, port=4420 00:23:11.116 [2024-11-29 13:07:10.827112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef610 is same with the state(6) to be set 00:23:11.116 [2024-11-29 13:07:10.827237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.116 [2024-11-29 13:07:10.827248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd8c70 with addr=10.0.0.2, port=4420 00:23:11.116 [2024-11-29 13:07:10.827254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd8c70 is same with the state(6) to be set 00:23:11.116 [2024-11-29 13:07:10.827455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.116 [2024-11-29 13:07:10.827467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x113b820 with addr=10.0.0.2, port=4420 00:23:11.116 [2024-11-29 13:07:10.827474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113b820 is same with the state(6) to be set 00:23:11.116 [2024-11-29 13:07:10.827504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1153100 (9): Bad file descriptor 00:23:11.116 [2024-11-29 13:07:10.827514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdad30 (9): Bad file descriptor 00:23:11.116 [2024-11-29 13:07:10.827523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccf200 (9): Bad file descriptor 00:23:11.116 [2024-11-29 13:07:10.827531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdb1c0 (9): Bad file descriptor 00:23:11.116 [2024-11-29 13:07:10.827540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbef610 (9): Bad file descriptor 00:23:11.116 [2024-11-29 13:07:10.827550] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd8c70 (9): Bad file descriptor 00:23:11.116 [2024-11-29 13:07:10.827558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113b820 (9): Bad file descriptor 00:23:11.116 [2024-11-29 13:07:10.827581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:11.116 [2024-11-29 13:07:10.827588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:11.116 [2024-11-29 13:07:10.827597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:11.116 [2024-11-29 13:07:10.827604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:11.116 [2024-11-29 13:07:10.827614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:11.116 [2024-11-29 13:07:10.827621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:11.116 [2024-11-29 13:07:10.827627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:11.116 [2024-11-29 13:07:10.827633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:11.116 [2024-11-29 13:07:10.827641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:11.116 [2024-11-29 13:07:10.827648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:11.116 [2024-11-29 13:07:10.827654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:23:11.116 [2024-11-29 13:07:10.827660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:11.116 [2024-11-29 13:07:10.827668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:11.116 [2024-11-29 13:07:10.827674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:11.116 [2024-11-29 13:07:10.827680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:11.116 [2024-11-29 13:07:10.827687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:11.116 [2024-11-29 13:07:10.827695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:11.116 [2024-11-29 13:07:10.827701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:11.116 [2024-11-29 13:07:10.827707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:11.116 [2024-11-29 13:07:10.827714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:11.116 [2024-11-29 13:07:10.827721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:11.116 [2024-11-29 13:07:10.827727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:11.116 [2024-11-29 13:07:10.827733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:23:11.116 [2024-11-29 13:07:10.827740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:11.116 [2024-11-29 13:07:10.827746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:11.116 [2024-11-29 13:07:10.827753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:11.116 [2024-11-29 13:07:10.827762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:11.116 [2024-11-29 13:07:10.827768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:11.376 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:12.755 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2053421 00:23:12.755 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2053421 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2053421 
00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.756 13:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.756 rmmod nvme_tcp 00:23:12.756 rmmod nvme_fabrics 00:23:12.756 rmmod nvme_keyring 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2053280 ']' 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2053280 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2053280 ']' 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2053280 00:23:12.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2053280) - No such process 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2053280 is not found' 00:23:12.756 Process with pid 2053280 is not found 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.756 13:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.756 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.660 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.660 00:23:14.660 real 0m7.096s 00:23:14.660 user 0m16.572s 00:23:14.660 sys 0m1.222s 00:23:14.660 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.660 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.660 ************************************ 00:23:14.660 END TEST nvmf_shutdown_tc3 00:23:14.660 ************************************ 00:23:14.660 13:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:14.660 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:14.661 ************************************ 00:23:14.661 START TEST nvmf_shutdown_tc4 00:23:14.661 ************************************ 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.661 13:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:14.661 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:14.661 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.661 13:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:14.661 Found net devices under 0000:86:00.0: cvl_0_0 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.661 13:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:14.661 Found net devices under 0000:86:00.1: cvl_0_1 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.661 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.920 
13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:23:14.920 00:23:14.920 --- 10.0.0.2 ping statistics --- 00:23:14.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.920 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:14.920 00:23:14.920 --- 10.0.0.1 ping statistics --- 00:23:14.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.920 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.920 13:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2054680 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2054680 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2054680 ']' 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.920 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:14.920 [2024-11-29 13:07:14.736049] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:23:14.920 [2024-11-29 13:07:14.736101] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.179 [2024-11-29 13:07:14.802399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.179 [2024-11-29 13:07:14.845738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.179 [2024-11-29 13:07:14.845776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.179 [2024-11-29 13:07:14.845783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.179 [2024-11-29 13:07:14.845790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.179 [2024-11-29 13:07:14.845795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:15.179 [2024-11-29 13:07:14.847369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.179 [2024-11-29 13:07:14.847455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.179 [2024-11-29 13:07:14.847584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.179 [2024-11-29 13:07:14.847585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.179 [2024-11-29 13:07:14.985248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.179 13:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.179 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.437 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.437 Malloc1 00:23:15.437 [2024-11-29 13:07:15.092903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.437 Malloc2 00:23:15.437 Malloc3 00:23:15.437 Malloc4 00:23:15.437 Malloc5 00:23:15.696 Malloc6 00:23:15.696 Malloc7 00:23:15.696 Malloc8 00:23:15.696 Malloc9 
00:23:15.696 Malloc10 00:23:15.696 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.696 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:15.696 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.696 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:15.955 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2054737 00:23:15.955 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:15.955 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:15.955 [2024-11-29 13:07:15.584096] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2054680 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2054680 ']' 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2054680 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2054680 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2054680' 00:23:21.239 killing process with pid 2054680 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2054680 00:23:21.239 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2054680 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 
00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 [2024-11-29 13:07:20.596830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfc020 is same with the state(6) to be set 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 
00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 [2024-11-29 13:07:20.597120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 starting I/O failed: -6 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.239 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error 
(sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 [2024-11-29 13:07:20.598047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write 
completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 Write completed with error (sct=0, sc=8) 
00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 [2024-11-29 13:07:20.599095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed 
with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.240 Write completed with error (sct=0, sc=8) 00:23:21.240 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write 
completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 [2024-11-29 13:07:20.600683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:21.241 NVMe io qpair process completion error 00:23:21.241 [2024-11-29 
13:07:20.604993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b6b0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b6b0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b6b0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b6b0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b6b0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b6b0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b6b0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b6b0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b6b0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bb80 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bb80 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bb80 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605545] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bb80 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bb80 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bb80 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bb80 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.605570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bb80 is same with the state(6) to be set 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 [2024-11-29 13:07:20.606006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c070 is same with the state(6) to be set 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 [2024-11-29 13:07:20.606033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c070 is same with the state(6) to be set 00:23:21.241 starting I/O failed: -6 00:23:21.241 [2024-11-29 13:07:20.606042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c070 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.606050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c070 is same with the state(6) to be set 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 [2024-11-29 13:07:20.606056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c070 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.606063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xc1c070 is same with the state(6) to be set 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 [2024-11-29 13:07:20.606484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b1e0 is same with the state(6) to be set 
00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 [2024-11-29 13:07:20.606508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b1e0 is same with the state(6) to be set 00:23:21.241 [2024-11-29 13:07:20.606517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b1e0 is same with the state(6) to be set 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 [2024-11-29 13:07:20.606525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b1e0 is same with the state(6) to be set 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 [2024-11-29 13:07:20.606532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b1e0 is same with the state(6) to be set 00:23:21.241 starting I/O failed: -6 00:23:21.241 [2024-11-29 13:07:20.606539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b1e0 is same with the state(6) to be set 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 [2024-11-29 13:07:20.606597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:21.241 starting I/O failed: -6 00:23:21.241 starting I/O failed: -6 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with 
error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 Write completed with error (sct=0, sc=8) 00:23:21.241 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 
00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 [2024-11-29 13:07:20.607540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 
00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 
00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 [2024-11-29 13:07:20.608569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 
Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 00:23:21.242 Write completed with error (sct=0, sc=8) 00:23:21.242 starting I/O failed: -6 
00:23:21.242 Write completed with error (sct=0, sc=8)
00:23:21.242 starting I/O failed: -6
00:23:21.242 [the two lines above repeat many times; verbatim duplicates elided]
00:23:21.243 [2024-11-29 13:07:20.610225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:21.243 NVMe io qpair process completion error
00:23:21.243 [2024-11-29 13:07:20.610980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1e8b0 is same with the state(6) to be set
00:23:21.243 [tcp.c:1773 message for tqpair=0xc1e8b0 repeats through 13:07:20.611070; duplicates elided]
00:23:21.243 Write completed with error (sct=0, sc=8)
00:23:21.243 starting I/O failed: -6
00:23:21.243 [the two lines above repeat, interleaved with the tcp.c messages below; duplicates elided]
00:23:21.243 [2024-11-29 13:07:20.611505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c8e0 is same with the state(6) to be set
00:23:21.243 [tcp.c:1773 message for tqpair=0xc1c8e0 repeats through 13:07:20.611571; duplicates elided]
00:23:21.243 [2024-11-29 13:07:20.611755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.243 [2024-11-29 13:07:20.612542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:21.244 [2024-11-29 13:07:20.613570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:21.244 NVMe io qpair process completion error
00:23:21.244 [2024-11-29 13:07:20.614670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.244 [2024-11-29 13:07:20.615457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.245 [2024-11-29 13:07:20.616503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:21.245 [2024-11-29 13:07:20.618153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:21.245 NVMe io qpair process completion error
00:23:21.246 [2024-11-29 13:07:20.619174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.246 [2024-11-29 13:07:20.620046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.246 Write completed with error (sct=0,
sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed 
with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 [2024-11-29 13:07:20.621083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 
00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.246 starting I/O failed: -6 00:23:21.246 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, 
sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error 
(sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 [2024-11-29 13:07:20.623231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.247 NVMe io qpair process completion error 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 
00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 [2024-11-29 13:07:20.624250] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:21.247 starting I/O failed: -6 00:23:21.247 starting I/O failed: -6 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 
00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 [2024-11-29 13:07:20.625168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.247 starting I/O failed: -6 00:23:21.247 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with 
error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 
Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 [2024-11-29 13:07:20.626251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 
00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: 
-6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O 
failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.248 starting I/O failed: -6 00:23:21.248 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 [2024-11-29 13:07:20.630565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.249 NVMe io qpair process completion error 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 
00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 starting I/O failed: -6 00:23:21.249 [2024-11-29 13:07:20.631617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write completed with error (sct=0, sc=8) 00:23:21.249 Write 
00:23:21.249 Write completed with error (sct=0, sc=8)
00:23:21.249 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:23:21.249 [2024-11-29 13:07:20.632531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion error entries elided ...]
00:23:21.250 [2024-11-29 13:07:20.633545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion error entries elided ...]
00:23:21.250 [2024-11-29 13:07:20.635384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.250 NVMe io qpair process completion error
[... repeated write-completion error entries elided ...]
00:23:21.251 [2024-11-29 13:07:20.636403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion error entries elided ...]
00:23:21.251 [2024-11-29 13:07:20.637384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion error entries elided ...]
00:23:21.251 [2024-11-29 13:07:20.638429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-completion error entries elided ...]
00:23:21.252 [2024-11-29 13:07:20.640246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:21.252 NVMe io qpair process completion error
[... repeated write-completion error entries elided ...]
00:23:21.252 [2024-11-29 13:07:20.641275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion error entries elided ...]
00:23:21.253 [2024-11-29 13:07:20.642191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-completion error entries elided ...]
00:23:21.253 [2024-11-29 13:07:20.643189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-completion error entries elided ...]
00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.253 starting I/O failed: -6 00:23:21.253 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: 
-6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 [2024-11-29 13:07:20.648893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.254 NVMe io qpair process completion error 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write 
completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 [2024-11-29 13:07:20.649976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write 
completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O 
failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 [2024-11-29 13:07:20.650904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write 
completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.254 starting I/O failed: -6 00:23:21.254 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 
00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 [2024-11-29 13:07:20.651903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, 
sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error 
(sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with 
error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 starting I/O failed: -6 00:23:21.255 [2024-11-29 13:07:20.656299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:21.255 NVMe io qpair process completion error 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 
00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.255 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 00:23:21.256 Write completed with error (sct=0, sc=8) 
00:23:21.256 Write completed with error (sct=0, sc=8)
00:23:21.256 Write completed with error (sct=0, sc=8)
00:23:21.256 Write completed with error (sct=0, sc=8)
00:23:21.256 Write completed with error (sct=0, sc=8)
00:23:21.256 Write completed with error (sct=0, sc=8)
00:23:21.256 Initializing NVMe Controllers
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:21.256 Controller IO queue size 128, less than required.
00:23:21.256 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[... the same two "Controller IO queue size 128" warning lines followed each of the ten controllers; repeats elided ...]
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:21.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
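The long runs of identical "Write completed with error" / "starting I/O failed" lines in this log are much easier to audit once collapsed into run-length form. A minimal sketch of such a collapse (an illustrative helper, not part of the SPDK tooling); since the elapsed-time prefix differs on every line, it is stripped before comparing:

```python
import re
from itertools import groupby

# Per-line elapsed-time prefix, e.g. "00:23:21.253 ", varies on every line
# and must be removed before duplicate detection.
_TS = re.compile(r"^\d{2}:\d{2}:\d{2}\.\d{3} ")

def collapse_runs(lines):
    """Collapse consecutive duplicate log lines into (text, count) pairs."""
    stripped = (_TS.sub("", line) for line in lines)
    return [(text, sum(1 for _ in group)) for text, group in groupby(stripped)]
```

Applied to this section, the thousands of completion-error lines reduce to a handful of (message, count) pairs around the distinct `nvme_qpair.c` CQ transport errors.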
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:21.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:21.256 Initialization complete. Launching workers.
00:23:21.256 ========================================================
00:23:21.256 Latency(us)
00:23:21.256 Device Information : IOPS MiB/s Average min max
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2099.27 90.20 60409.82 955.00 99536.84
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2137.60 91.85 59840.11 957.49 140729.34
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2127.65 91.42 60182.67 764.86 118685.81
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2137.18 91.83 59319.04 702.30 111000.42
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2141.62 92.02 59477.36 702.35 121235.96
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2155.17 92.61 58829.81 801.66 101031.60
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2150.09 92.39 58979.35 909.05 108172.89
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2158.35 92.74 58772.90 693.75 108298.87
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2119.39 91.07 59898.58 747.82 110472.18
00:23:21.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2164.49 93.01 58664.27 668.80 108038.22
00:23:21.256 ========================================================
00:23:21.256 Total : 21390.81 919.14 59432.54 668.80 140729.34
00:23:21.256
00:23:21.256 [2024-11-29 13:07:20.661441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043720 is same with the state(6) to be set
00:23:21.256 [2024-11-29 13:07:20.661488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041bc0 is same with the state(6) to be set
00:23:21.256 [2024-11-29 13:07:20.661519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2042a70 is same with the state(6) to be set
00:23:21.256 [2024-11-29 13:07:20.661548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043ae0 is same with the state(6) to be set
00:23:21.256 [2024-11-29 13:07:20.661576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2042740 is same with the state(6) to be set
00:23:21.256 [2024-11-29 13:07:20.661605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041560 is same with the state(6) to be set
00:23:21.256 [2024-11-29 13:07:20.661633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043900 is same with the state(6) to be set
00:23:21.256 [2024-11-29 13:07:20.661661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041890 is same with the state(6) to be set
00:23:21.256 [2024-11-29 13:07:20.661689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2041ef0 is same with the state(6) to be set
00:23:21.256 [2024-11-29 13:07:20.661717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2042410 is same with the state(6) to be set
00:23:21.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:21.256 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2054737
13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2054737
13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2054737
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:22.195 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:22.195 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.195 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:22.195 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.195 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:22.195 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.195 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.195 rmmod nvme_tcp 00:23:22.454 rmmod nvme_fabrics 00:23:22.454 rmmod nvme_keyring 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2054680 ']' 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2054680 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2054680 ']' 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2054680 00:23:22.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2054680) - No such process 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2054680 is not found' 00:23:22.454 Process with pid 2054680 is not found 
00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.454 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.455 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.455 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.455 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.362 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.362 00:23:24.362 real 0m9.790s 00:23:24.362 user 0m24.856s 00:23:24.362 sys 0m5.291s 00:23:24.362 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.362 13:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:24.362 ************************************ 00:23:24.362 END TEST nvmf_shutdown_tc4 00:23:24.362 ************************************ 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:24.622 00:23:24.622 real 0m39.500s 00:23:24.622 user 1m36.900s 00:23:24.622 sys 0m13.597s 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:24.622 ************************************ 00:23:24.622 END TEST nvmf_shutdown 00:23:24.622 ************************************ 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:24.622 ************************************ 00:23:24.622 START TEST nvmf_nsid 00:23:24.622 ************************************ 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:24.622 * Looking for test storage... 
00:23:24.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.622 
13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.622 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.882 --rc genhtml_branch_coverage=1 00:23:24.882 --rc genhtml_function_coverage=1 00:23:24.882 --rc genhtml_legend=1 00:23:24.882 --rc geninfo_all_blocks=1 00:23:24.882 --rc 
geninfo_unexecuted_blocks=1 00:23:24.882 00:23:24.882 ' 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.882 --rc genhtml_branch_coverage=1 00:23:24.882 --rc genhtml_function_coverage=1 00:23:24.882 --rc genhtml_legend=1 00:23:24.882 --rc geninfo_all_blocks=1 00:23:24.882 --rc geninfo_unexecuted_blocks=1 00:23:24.882 00:23:24.882 ' 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.882 --rc genhtml_branch_coverage=1 00:23:24.882 --rc genhtml_function_coverage=1 00:23:24.882 --rc genhtml_legend=1 00:23:24.882 --rc geninfo_all_blocks=1 00:23:24.882 --rc geninfo_unexecuted_blocks=1 00:23:24.882 00:23:24.882 ' 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.882 --rc genhtml_branch_coverage=1 00:23:24.882 --rc genhtml_function_coverage=1 00:23:24.882 --rc genhtml_legend=1 00:23:24.882 --rc geninfo_all_blocks=1 00:23:24.882 --rc geninfo_unexecuted_blocks=1 00:23:24.882 00:23:24.882 ' 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:23:24.882 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.883 13:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:23:24.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid=
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.883 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.152 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.152 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:30.152 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:30.152 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:30.152 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:30.152 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:30.152 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:30.152 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:30.152 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:30.153 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:30.153 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:30.153 Found net devices under 0000:86:00.0: cvl_0_0 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:30.153 Found net devices under 0000:86:00.1: cvl_0_1 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:30.153 13:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:30.153 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:23:30.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:23:30.153 00:23:30.153 --- 10.0.0.2 ping statistics --- 00:23:30.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.153 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:30.153 00:23:30.153 --- 10.0.0.1 ping statistics --- 00:23:30.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.153 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:30.153 13:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2059189 00:23:30.153 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2059189 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2059189 ']' 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:30.154 [2024-11-29 13:07:29.451190] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:23:30.154 [2024-11-29 13:07:29.451233] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.154 [2024-11-29 13:07:29.516996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.154 [2024-11-29 13:07:29.558292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.154 [2024-11-29 13:07:29.558328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.154 [2024-11-29 13:07:29.558335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.154 [2024-11-29 13:07:29.558344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.154 [2024-11-29 13:07:29.558349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
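The `nvmf_tcp_init` steps recorded above (flush both e810 ports, move one into a network namespace as the target, address both sides, open TCP port 4420, then ping both directions) can be condensed into a sketch. Interface names, addresses, and the iptables comment are taken from the log; this is illustrative only and needs root plus the `cvl_0_*` netdevs to actually run, so it is wrapped in a function here.

```shell
# Sketch of the nvmf_tcp_init sequence from the log: cvl_0_0 becomes the
# target inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the default
# namespace as the initiator. Requires root and the e810 netdevs.
nvmf_tcp_init_sketch() {
    local ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # allow NVMe/TCP (port 4420) in, tagged so cleanup can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```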
00:23:30.154 [2024-11-29 13:07:29.558905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2059210 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.154 
13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=9fbf154c-d01c-4eb8-936c-abaaea1c18dd 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=c041597a-d412-4087-81a5-3bf906b26825 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e59719fc-776a-46cf-8e80-743644744a8f 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.154 null0 00:23:30.154 null1 00:23:30.154 [2024-11-29 13:07:29.737498] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
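The second target is configured over `/var/tmp/tgt2.sock` with three null bdevs (`null0`..`null2`) exposed as namespaces carrying the three `uuidgen`'d UUIDs, listening on 10.0.0.1:4421 as `nqn.2024-10.io.spdk:cnode2` (all values visible in the log). The exact RPC sequence lives in `test/nvmf/target/nsid.sh`; the sketch below is a plausible reconstruction using standard SPDK RPCs, and the bdev sizes and the `-u` UUID option are assumptions.

```shell
# Hypothetical reconstruction of the tgt2 setup driven over /var/tmp/tgt2.sock.
# RPC names are standard SPDK RPCs; sizes and flag spellings are assumed.
setup_tgt2_sketch() {
    local rpc="scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc nvmf_create_transport -t tcp -o
    for i in 0 1 2; do
        $rpc bdev_null_create "null$i" 100 4096            # 100 MiB, 4K blocks (assumed)
    done
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    # attach each null bdev as a namespace with a fixed UUID (values from the log)
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u 9fbf154c-d01c-4eb8-936c-abaaea1c18dd
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 -u c041597a-d412-4087-81a5-3bf906b26825
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 -u e59719fc-776a-46cf-8e80-743644744a8f
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421
}
```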
00:23:30.154 [2024-11-29 13:07:29.737543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059210 ] 00:23:30.154 null2 00:23:30.154 [2024-11-29 13:07:29.744184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.154 [2024-11-29 13:07:29.768380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.154 [2024-11-29 13:07:29.799306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2059210 /var/tmp/tgt2.sock 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2059210 ']' 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:30.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.154 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:30.154 [2024-11-29 13:07:29.840766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.413 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.413 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:30.413 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:30.672 [2024-11-29 13:07:30.381783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.672 [2024-11-29 13:07:30.397896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:30.672 nvme0n1 nvme0n2 00:23:30.672 nvme1n1 00:23:30.672 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:30.672 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:30.672 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:32.050 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 9fbf154c-d01c-4eb8-936c-abaaea1c18dd 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:32.984 
13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9fbf154cd01c4eb8936cabaaea1c18dd 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9FBF154CD01C4EB8936CABAAEA1C18DD 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 9FBF154CD01C4EB8936CABAAEA1C18DD == \9\F\B\F\1\5\4\C\D\0\1\C\4\E\B\8\9\3\6\C\A\B\A\A\E\A\1\C\1\8\D\D ]] 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid c041597a-d412-4087-81a5-3bf906b26825 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 
00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c041597ad412408781a53bf906b26825 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C041597AD412408781A53BF906B26825 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ C041597AD412408781A53BF906B26825 == \C\0\4\1\5\9\7\A\D\4\1\2\4\0\8\7\8\1\A\5\3\B\F\9\0\6\B\2\6\8\2\5 ]] 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e59719fc-776a-46cf-8e80-743644744a8f 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 
00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e59719fc776a46cf8e80743644744a8f 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E59719FC776A46CF8E80743644744A8F 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E59719FC776A46CF8E80743644744A8F == \E\5\9\7\1\9\F\C\7\7\6\A\4\6\C\F\8\E\8\0\7\4\3\6\4\4\7\4\4\A\8\F ]] 00:23:32.984 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2059210 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2059210 ']' 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2059210 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2059210 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2059210' 00:23:33.242 killing process with pid 2059210 00:23:33.242 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2059210 00:23:33.242 
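The three NGUID checks above rely on the fact that an RFC 4122 UUID maps onto the 16-byte NGUID simply by dropping the dashes: the test strips dashes from the assigned UUID (`tr -d -`, per `nvmf/common.sh@787`), reads back `nvme id-ns ... | jq -r .nguid`, and compares the uppercased hex strings. A minimal standalone sketch of that conversion:

```shell
# uuid2nguid as used by the checks in the log: drop the dashes from the
# UUID and uppercase the hex so it matches the echoed NGUID form.
uuid2nguid() {
    tr -d - <<< "$1" | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 9fbf154c-d01c-4eb8-936c-abaaea1c18dd
# -> 9FBF154CD01C4EB8936CABAAEA1C18DD
```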
13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2059210 00:23:33.500 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:33.500 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.500 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:33.500 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.500 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:23:33.500 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.500 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.500 rmmod nvme_tcp 00:23:33.500 rmmod nvme_fabrics 00:23:33.758 rmmod nvme_keyring 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2059189 ']' 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2059189 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2059189 ']' 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2059189 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2059189 00:23:33.758 
13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.758 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2059189' 00:23:33.759 killing process with pid 2059189 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2059189 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2059189 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.759 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.293 13:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.293 00:23:36.293 real 0m11.365s 00:23:36.293 user 0m9.214s 00:23:36.293 sys 0m4.666s 00:23:36.293 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.293 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:36.293 ************************************ 00:23:36.293 END TEST nvmf_nsid 00:23:36.293 ************************************ 00:23:36.293 13:07:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:36.293 00:23:36.293 real 11m40.554s 00:23:36.293 user 25m25.933s 00:23:36.293 sys 3m27.814s 00:23:36.293 13:07:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.293 13:07:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:36.293 ************************************ 00:23:36.293 END TEST nvmf_target_extra 00:23:36.293 ************************************ 00:23:36.293 13:07:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:36.293 13:07:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:36.293 13:07:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.293 13:07:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:36.293 ************************************ 00:23:36.293 START TEST nvmf_host 00:23:36.293 ************************************ 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:36.293 * Looking for test storage... 
00:23:36.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:36.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.293 --rc genhtml_branch_coverage=1 00:23:36.293 --rc genhtml_function_coverage=1 00:23:36.293 --rc genhtml_legend=1 00:23:36.293 --rc geninfo_all_blocks=1 00:23:36.293 --rc geninfo_unexecuted_blocks=1 00:23:36.293 00:23:36.293 ' 00:23:36.293 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:36.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.293 --rc genhtml_branch_coverage=1 00:23:36.293 --rc genhtml_function_coverage=1 00:23:36.293 --rc genhtml_legend=1 00:23:36.293 --rc 
geninfo_all_blocks=1 00:23:36.293 --rc geninfo_unexecuted_blocks=1 00:23:36.293 00:23:36.293 ' 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:36.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.294 --rc genhtml_branch_coverage=1 00:23:36.294 --rc genhtml_function_coverage=1 00:23:36.294 --rc genhtml_legend=1 00:23:36.294 --rc geninfo_all_blocks=1 00:23:36.294 --rc geninfo_unexecuted_blocks=1 00:23:36.294 00:23:36.294 ' 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:36.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.294 --rc genhtml_branch_coverage=1 00:23:36.294 --rc genhtml_function_coverage=1 00:23:36.294 --rc genhtml_legend=1 00:23:36.294 --rc geninfo_all_blocks=1 00:23:36.294 --rc geninfo_unexecuted_blocks=1 00:23:36.294 00:23:36.294 ' 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.294 ************************************ 00:23:36.294 START TEST nvmf_multicontroller 00:23:36.294 ************************************ 00:23:36.294 13:07:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:36.294 * Looking for test storage... 
00:23:36.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.294 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:36.294 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:36.294 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.554 --rc genhtml_branch_coverage=1 00:23:36.554 --rc genhtml_function_coverage=1 
00:23:36.554 --rc genhtml_legend=1 00:23:36.554 --rc geninfo_all_blocks=1 00:23:36.554 --rc geninfo_unexecuted_blocks=1 00:23:36.554 00:23:36.554 ' 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.554 --rc genhtml_branch_coverage=1 00:23:36.554 --rc genhtml_function_coverage=1 00:23:36.554 --rc genhtml_legend=1 00:23:36.554 --rc geninfo_all_blocks=1 00:23:36.554 --rc geninfo_unexecuted_blocks=1 00:23:36.554 00:23:36.554 ' 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.554 --rc genhtml_branch_coverage=1 00:23:36.554 --rc genhtml_function_coverage=1 00:23:36.554 --rc genhtml_legend=1 00:23:36.554 --rc geninfo_all_blocks=1 00:23:36.554 --rc geninfo_unexecuted_blocks=1 00:23:36.554 00:23:36.554 ' 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:36.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.554 --rc genhtml_branch_coverage=1 00:23:36.554 --rc genhtml_function_coverage=1 00:23:36.554 --rc genhtml_legend=1 00:23:36.554 --rc geninfo_all_blocks=1 00:23:36.554 --rc geninfo_unexecuted_blocks=1 00:23:36.554 00:23:36.554 ' 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:36.554 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.555 13:07:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.555 13:07:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:41.816 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:41.816 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.816 13:07:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:41.816 Found net devices under 0000:86:00.0: cvl_0_0 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:41.816 Found net devices under 0000:86:00.1: cvl_0_1 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:41.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:23:41.816 00:23:41.816 --- 10.0.0.2 ping statistics --- 00:23:41.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.816 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:23:41.816 00:23:41.816 --- 10.0.0.1 ping statistics --- 00:23:41.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.816 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2063294 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2063294 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2063294 ']' 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.816 13:07:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 [2024-11-29 13:07:41.040637] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:23:41.816 [2024-11-29 13:07:41.040686] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.816 [2024-11-29 13:07:41.109490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:41.816 [2024-11-29 13:07:41.152186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.816 [2024-11-29 13:07:41.152224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:41.816 [2024-11-29 13:07:41.152231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.816 [2024-11-29 13:07:41.152238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.816 [2024-11-29 13:07:41.152243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.816 [2024-11-29 13:07:41.153696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.816 [2024-11-29 13:07:41.153759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.816 [2024-11-29 13:07:41.153760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 [2024-11-29 13:07:41.296065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 Malloc0 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 [2024-11-29 
13:07:41.365890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.816 [2024-11-29 13:07:41.373824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.816 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.817 Malloc1 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2063437 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2063437 /var/tmp/bdevperf.sock 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2063437 ']' 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.817 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.075 NVMe0n1 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.075 1 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:42.075 13:07:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.075 request: 00:23:42.075 { 00:23:42.075 "name": "NVMe0", 00:23:42.075 "trtype": "tcp", 00:23:42.075 "traddr": "10.0.0.2", 00:23:42.075 "adrfam": "ipv4", 00:23:42.075 "trsvcid": "4420", 00:23:42.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.075 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:42.075 "hostaddr": "10.0.0.1", 00:23:42.075 "prchk_reftag": false, 00:23:42.075 "prchk_guard": false, 00:23:42.075 "hdgst": false, 00:23:42.075 "ddgst": false, 00:23:42.075 "allow_unrecognized_csi": false, 00:23:42.075 "method": "bdev_nvme_attach_controller", 00:23:42.075 "req_id": 1 00:23:42.075 } 00:23:42.075 Got JSON-RPC error response 00:23:42.075 response: 00:23:42.075 { 00:23:42.075 "code": -114, 00:23:42.075 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:42.075 } 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.075 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:42.076 13:07:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.076 request: 00:23:42.076 { 00:23:42.076 "name": "NVMe0", 00:23:42.076 "trtype": "tcp", 00:23:42.076 "traddr": "10.0.0.2", 00:23:42.076 "adrfam": "ipv4", 00:23:42.076 "trsvcid": "4420", 00:23:42.076 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:42.076 "hostaddr": "10.0.0.1", 00:23:42.076 "prchk_reftag": false, 00:23:42.076 "prchk_guard": false, 00:23:42.076 "hdgst": false, 00:23:42.076 "ddgst": false, 00:23:42.076 "allow_unrecognized_csi": false, 00:23:42.076 "method": "bdev_nvme_attach_controller", 00:23:42.076 "req_id": 1 00:23:42.076 } 00:23:42.076 Got JSON-RPC error response 00:23:42.076 response: 00:23:42.076 { 00:23:42.076 "code": -114, 00:23:42.076 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:42.076 } 00:23:42.076 13:07:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.076 request: 00:23:42.076 { 00:23:42.076 "name": "NVMe0", 00:23:42.076 "trtype": "tcp", 00:23:42.076 "traddr": "10.0.0.2", 00:23:42.076 "adrfam": "ipv4", 00:23:42.076 "trsvcid": "4420", 00:23:42.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.076 "hostaddr": "10.0.0.1", 00:23:42.076 "prchk_reftag": false, 00:23:42.076 "prchk_guard": false, 00:23:42.076 "hdgst": false, 00:23:42.076 "ddgst": false, 00:23:42.076 "multipath": "disable", 00:23:42.076 "allow_unrecognized_csi": false, 00:23:42.076 "method": "bdev_nvme_attach_controller", 00:23:42.076 "req_id": 1 00:23:42.076 } 00:23:42.076 Got JSON-RPC error response 00:23:42.076 response: 00:23:42.076 { 00:23:42.076 "code": -114, 00:23:42.076 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:42.076 } 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.076 request: 00:23:42.076 { 00:23:42.076 "name": "NVMe0", 00:23:42.076 "trtype": "tcp", 00:23:42.076 "traddr": "10.0.0.2", 00:23:42.076 "adrfam": "ipv4", 00:23:42.076 "trsvcid": "4420", 00:23:42.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.076 "hostaddr": "10.0.0.1", 00:23:42.076 "prchk_reftag": false, 00:23:42.076 "prchk_guard": false, 00:23:42.076 "hdgst": false, 00:23:42.076 "ddgst": false, 00:23:42.076 "multipath": "failover", 00:23:42.076 "allow_unrecognized_csi": false, 00:23:42.076 "method": "bdev_nvme_attach_controller", 00:23:42.076 "req_id": 1 00:23:42.076 } 00:23:42.076 Got JSON-RPC error response 00:23:42.076 response: 00:23:42.076 { 00:23:42.076 "code": -114, 00:23:42.076 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:42.076 } 00:23:42.076 13:07:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.076 13:07:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.334 NVMe0n1 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.334 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:42.334 13:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:43.709 { 00:23:43.709 "results": [ 00:23:43.709 { 00:23:43.709 "job": "NVMe0n1", 00:23:43.709 "core_mask": "0x1", 00:23:43.709 "workload": "write", 00:23:43.709 "status": "finished", 00:23:43.709 "queue_depth": 128, 00:23:43.709 "io_size": 4096, 00:23:43.709 "runtime": 1.007436, 00:23:43.709 "iops": 22653.548215469767, 00:23:43.709 "mibps": 88.49042271667878, 00:23:43.709 "io_failed": 0, 00:23:43.709 "io_timeout": 0, 00:23:43.709 "avg_latency_us": 5631.654235691723, 00:23:43.709 "min_latency_us": 4274.086956521739, 00:23:43.709 "max_latency_us": 15158.761739130436 00:23:43.709 } 00:23:43.709 ], 00:23:43.709 "core_count": 1 00:23:43.709 } 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2063437 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2063437 ']' 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2063437 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063437 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.709 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063437' 00:23:43.709 killing process with pid 2063437 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2063437 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2063437 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:43.710 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:43.984 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:43.984 [2024-11-29 13:07:41.478782] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:23:43.984 [2024-11-29 13:07:41.478835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2063437 ] 00:23:43.984 [2024-11-29 13:07:41.544070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.984 [2024-11-29 13:07:41.587475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.984 [2024-11-29 13:07:42.118951] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 08faeeab-89a9-4b27-854d-8cd7be3de8dd already exists 00:23:43.984 [2024-11-29 13:07:42.118978] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:08faeeab-89a9-4b27-854d-8cd7be3de8dd alias for bdev NVMe1n1 00:23:43.984 [2024-11-29 13:07:42.118986] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:43.984 Running I/O for 1 seconds... 00:23:43.984 22646.00 IOPS, 88.46 MiB/s 00:23:43.984 Latency(us) 00:23:43.984 [2024-11-29T12:07:43.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.984 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:43.984 NVMe0n1 : 1.01 22653.55 88.49 0.00 0.00 5631.65 4274.09 15158.76 00:23:43.984 [2024-11-29T12:07:43.804Z] =================================================================================================================== 00:23:43.984 [2024-11-29T12:07:43.804Z] Total : 22653.55 88.49 0.00 0.00 5631.65 4274.09 15158.76 00:23:43.984 Received shutdown signal, test time was about 1.000000 seconds 00:23:43.984 00:23:43.984 Latency(us) 00:23:43.984 [2024-11-29T12:07:43.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.984 [2024-11-29T12:07:43.804Z] =================================================================================================================== 00:23:43.984 [2024-11-29T12:07:43.804Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:43.984 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.984 rmmod nvme_tcp 00:23:43.984 rmmod nvme_fabrics 00:23:43.984 rmmod nvme_keyring 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2063294 ']' 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2063294 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2063294 ']' 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2063294 
00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063294 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063294' 00:23:43.984 killing process with pid 2063294 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2063294 00:23:43.984 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2063294 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.278 13:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.221 13:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.221 00:23:46.221 real 0m9.958s 00:23:46.221 user 0m11.509s 00:23:46.221 sys 0m4.354s 00:23:46.221 13:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.221 13:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.221 ************************************ 00:23:46.221 END TEST nvmf_multicontroller 00:23:46.221 ************************************ 00:23:46.221 13:07:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:46.221 13:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.221 13:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.222 13:07:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.222 ************************************ 00:23:46.222 START TEST nvmf_aer 00:23:46.222 ************************************ 00:23:46.222 13:07:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:46.481 * Looking for test storage... 
00:23:46.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.481 --rc genhtml_branch_coverage=1 00:23:46.481 --rc genhtml_function_coverage=1 00:23:46.481 --rc genhtml_legend=1 00:23:46.481 --rc geninfo_all_blocks=1 00:23:46.481 --rc geninfo_unexecuted_blocks=1 00:23:46.481 00:23:46.481 ' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.481 --rc 
genhtml_branch_coverage=1 00:23:46.481 --rc genhtml_function_coverage=1 00:23:46.481 --rc genhtml_legend=1 00:23:46.481 --rc geninfo_all_blocks=1 00:23:46.481 --rc geninfo_unexecuted_blocks=1 00:23:46.481 00:23:46.481 ' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.481 --rc genhtml_branch_coverage=1 00:23:46.481 --rc genhtml_function_coverage=1 00:23:46.481 --rc genhtml_legend=1 00:23:46.481 --rc geninfo_all_blocks=1 00:23:46.481 --rc geninfo_unexecuted_blocks=1 00:23:46.481 00:23:46.481 ' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:46.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.481 --rc genhtml_branch_coverage=1 00:23:46.481 --rc genhtml_function_coverage=1 00:23:46.481 --rc genhtml_legend=1 00:23:46.481 --rc geninfo_all_blocks=1 00:23:46.481 --rc geninfo_unexecuted_blocks=1 00:23:46.481 00:23:46.481 ' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.481 13:07:46 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.481 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.482 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.482 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.482 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.482 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:46.482 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.482 13:07:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:51.746 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:51.746 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.746 13:07:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:51.746 Found net devices under 0000:86:00.0: cvl_0_0 00:23:51.746 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:51.747 Found net devices under 0000:86:00.1: cvl_0_1 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.747 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:52.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:23:52.006 00:23:52.006 --- 10.0.0.2 ping statistics --- 00:23:52.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.006 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:23:52.006 00:23:52.006 --- 10.0.0.1 ping statistics --- 00:23:52.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.006 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2067317 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2067317 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2067317 ']' 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.006 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.006 [2024-11-29 13:07:51.726354] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:23:52.006 [2024-11-29 13:07:51.726396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.007 [2024-11-29 13:07:51.792453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.266 [2024-11-29 13:07:51.836439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:52.266 [2024-11-29 13:07:51.836475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.266 [2024-11-29 13:07:51.836482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.266 [2024-11-29 13:07:51.836488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.266 [2024-11-29 13:07:51.836493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.266 [2024-11-29 13:07:51.837979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.266 [2024-11-29 13:07:51.838079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.266 [2024-11-29 13:07:51.838185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.266 [2024-11-29 13:07:51.838187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.266 [2024-11-29 13:07:51.976294] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.266 13:07:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.267 Malloc0 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.267 [2024-11-29 13:07:52.036798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.267 [ 00:23:52.267 { 00:23:52.267 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:52.267 "subtype": "Discovery", 00:23:52.267 "listen_addresses": [], 00:23:52.267 "allow_any_host": true, 00:23:52.267 "hosts": [] 00:23:52.267 }, 00:23:52.267 { 00:23:52.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.267 "subtype": "NVMe", 00:23:52.267 "listen_addresses": [ 00:23:52.267 { 00:23:52.267 "trtype": "TCP", 00:23:52.267 "adrfam": "IPv4", 00:23:52.267 "traddr": "10.0.0.2", 00:23:52.267 "trsvcid": "4420" 00:23:52.267 } 00:23:52.267 ], 00:23:52.267 "allow_any_host": true, 00:23:52.267 "hosts": [], 00:23:52.267 "serial_number": "SPDK00000000000001", 00:23:52.267 "model_number": "SPDK bdev Controller", 00:23:52.267 "max_namespaces": 2, 00:23:52.267 "min_cntlid": 1, 00:23:52.267 "max_cntlid": 65519, 00:23:52.267 "namespaces": [ 00:23:52.267 { 00:23:52.267 "nsid": 1, 00:23:52.267 "bdev_name": "Malloc0", 00:23:52.267 "name": "Malloc0", 00:23:52.267 "nguid": "42BE103F0F3948398D6B63CEE33B8C75", 00:23:52.267 "uuid": "42be103f-0f39-4839-8d6b-63cee33b8c75" 00:23:52.267 } 00:23:52.267 ] 00:23:52.267 } 00:23:52.267 ] 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2067340 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:52.267 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:52.525 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:52.525 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:52.525 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:52.525 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:52.525 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:52.525 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:52.525 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:52.525 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:52.525 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.526 Malloc1 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.526 [ 00:23:52.526 { 00:23:52.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:52.526 "subtype": "Discovery", 00:23:52.526 "listen_addresses": [], 00:23:52.526 "allow_any_host": true, 00:23:52.526 "hosts": [] 00:23:52.526 }, 00:23:52.526 { 00:23:52.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.526 "subtype": "NVMe", 00:23:52.526 "listen_addresses": [ 00:23:52.526 { 00:23:52.526 "trtype": "TCP", 00:23:52.526 "adrfam": "IPv4", 00:23:52.526 "traddr": "10.0.0.2", 00:23:52.526 "trsvcid": "4420" 00:23:52.526 } 00:23:52.526 ], 00:23:52.526 "allow_any_host": true, 00:23:52.526 "hosts": [], 00:23:52.526 "serial_number": "SPDK00000000000001", 00:23:52.526 "model_number": 
"SPDK bdev Controller", 00:23:52.526 "max_namespaces": 2, 00:23:52.526 "min_cntlid": 1, 00:23:52.526 "max_cntlid": 65519, 00:23:52.526 "namespaces": [ 00:23:52.526 { 00:23:52.526 "nsid": 1, 00:23:52.526 "bdev_name": "Malloc0", 00:23:52.526 "name": "Malloc0", 00:23:52.526 "nguid": "42BE103F0F3948398D6B63CEE33B8C75", 00:23:52.526 "uuid": "42be103f-0f39-4839-8d6b-63cee33b8c75" 00:23:52.526 }, 00:23:52.526 { 00:23:52.526 "nsid": 2, 00:23:52.526 "bdev_name": "Malloc1", 00:23:52.526 "name": "Malloc1", 00:23:52.526 "nguid": "3D06BA7291814CB2A46A37EF5AFE55EB", 00:23:52.526 "uuid": "3d06ba72-9181-4cb2-a46a-37ef5afe55eb" 00:23:52.526 } 00:23:52.526 ] 00:23:52.526 } 00:23:52.526 ] 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2067340 00:23:52.526 Asynchronous Event Request test 00:23:52.526 Attaching to 10.0.0.2 00:23:52.526 Attached to 10.0.0.2 00:23:52.526 Registering asynchronous event callbacks... 00:23:52.526 Starting namespace attribute notice tests for all controllers... 00:23:52.526 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:52.526 aer_cb - Changed Namespace 00:23:52.526 Cleaning up... 
00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.526 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.784 rmmod nvme_tcp 
00:23:52.784 rmmod nvme_fabrics 00:23:52.784 rmmod nvme_keyring 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2067317 ']' 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2067317 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2067317 ']' 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2067317 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2067317 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2067317' 00:23:52.784 killing process with pid 2067317 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2067317 00:23:52.784 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2067317 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:53.043 13:07:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.043 13:07:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.943 13:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.943 00:23:54.943 real 0m8.760s 00:23:54.943 user 0m5.013s 00:23:54.943 sys 0m4.474s 00:23:54.943 13:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.943 13:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:54.943 ************************************ 00:23:54.943 END TEST nvmf_aer 00:23:54.943 ************************************ 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.201 ************************************ 00:23:55.201 START TEST nvmf_async_init 
00:23:55.201 ************************************ 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:55.201 * Looking for test storage... 00:23:55.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.201 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:55.202 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:55.202 --rc genhtml_branch_coverage=1 00:23:55.202 --rc genhtml_function_coverage=1 00:23:55.202 --rc genhtml_legend=1 00:23:55.202 --rc geninfo_all_blocks=1 00:23:55.202 --rc geninfo_unexecuted_blocks=1 00:23:55.202 00:23:55.202 ' 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:55.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.202 --rc genhtml_branch_coverage=1 00:23:55.202 --rc genhtml_function_coverage=1 00:23:55.202 --rc genhtml_legend=1 00:23:55.202 --rc geninfo_all_blocks=1 00:23:55.202 --rc geninfo_unexecuted_blocks=1 00:23:55.202 00:23:55.202 ' 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:55.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.202 --rc genhtml_branch_coverage=1 00:23:55.202 --rc genhtml_function_coverage=1 00:23:55.202 --rc genhtml_legend=1 00:23:55.202 --rc geninfo_all_blocks=1 00:23:55.202 --rc geninfo_unexecuted_blocks=1 00:23:55.202 00:23:55.202 ' 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:55.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.202 --rc genhtml_branch_coverage=1 00:23:55.202 --rc genhtml_function_coverage=1 00:23:55.202 --rc genhtml_legend=1 00:23:55.202 --rc geninfo_all_blocks=1 00:23:55.202 --rc geninfo_unexecuted_blocks=1 00:23:55.202 00:23:55.202 ' 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.202 13:07:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.202 13:07:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.202 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.460 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.460 
13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.460 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.460 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=93b93fe9bf3343f5b55ff691cc0c317e 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.461 13:07:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:00.720 13:08:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:00.720 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:00.720 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:00.720 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:00.721 Found net devices under 0000:86:00.0: cvl_0_0 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:00.721 Found net devices under 0000:86:00.1: cvl_0_1 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.721 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:00.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:00.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:24:00.979 00:24:00.979 --- 10.0.0.2 ping statistics --- 00:24:00.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.979 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:24:00.979 00:24:00.979 --- 10.0.0.1 ping statistics --- 00:24:00.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.979 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.979 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2070872 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2070872 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2070872 ']' 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.236 13:08:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.236 [2024-11-29 13:08:00.855682] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:24:01.236 [2024-11-29 13:08:00.855727] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.236 [2024-11-29 13:08:00.922169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.236 [2024-11-29 13:08:00.964429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.236 [2024-11-29 13:08:00.964464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.236 [2024-11-29 13:08:00.964471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.236 [2024-11-29 13:08:00.964477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.236 [2024-11-29 13:08:00.964483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:01.236 [2024-11-29 13:08:00.965063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.493 [2024-11-29 13:08:01.103273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.493 null0 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 93b93fe9bf3343f5b55ff691cc0c317e 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.493 [2024-11-29 13:08:01.155557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.493 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.750 nvme0n1 00:24:01.750 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.750 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:01.750 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.750 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.750 [ 00:24:01.750 { 00:24:01.750 "name": "nvme0n1", 00:24:01.750 "aliases": [ 00:24:01.750 "93b93fe9-bf33-43f5-b55f-f691cc0c317e" 00:24:01.750 ], 00:24:01.750 "product_name": "NVMe disk", 00:24:01.750 "block_size": 512, 00:24:01.750 "num_blocks": 2097152, 00:24:01.750 "uuid": "93b93fe9-bf33-43f5-b55f-f691cc0c317e", 00:24:01.750 "numa_id": 1, 00:24:01.750 "assigned_rate_limits": { 00:24:01.750 "rw_ios_per_sec": 0, 00:24:01.750 "rw_mbytes_per_sec": 0, 00:24:01.750 "r_mbytes_per_sec": 0, 00:24:01.750 "w_mbytes_per_sec": 0 00:24:01.750 }, 00:24:01.750 "claimed": false, 00:24:01.750 "zoned": false, 00:24:01.750 "supported_io_types": { 00:24:01.750 "read": true, 00:24:01.750 "write": true, 00:24:01.750 "unmap": false, 00:24:01.750 "flush": true, 00:24:01.750 "reset": true, 00:24:01.750 "nvme_admin": true, 00:24:01.750 "nvme_io": true, 00:24:01.750 "nvme_io_md": false, 00:24:01.750 "write_zeroes": true, 00:24:01.750 "zcopy": false, 00:24:01.750 "get_zone_info": false, 00:24:01.750 "zone_management": false, 00:24:01.750 "zone_append": false, 00:24:01.750 "compare": true, 00:24:01.750 "compare_and_write": true, 00:24:01.751 "abort": true, 00:24:01.751 "seek_hole": false, 00:24:01.751 "seek_data": false, 00:24:01.751 "copy": true, 00:24:01.751 
"nvme_iov_md": false 00:24:01.751 }, 00:24:01.751 "memory_domains": [ 00:24:01.751 { 00:24:01.751 "dma_device_id": "system", 00:24:01.751 "dma_device_type": 1 00:24:01.751 } 00:24:01.751 ], 00:24:01.751 "driver_specific": { 00:24:01.751 "nvme": [ 00:24:01.751 { 00:24:01.751 "trid": { 00:24:01.751 "trtype": "TCP", 00:24:01.751 "adrfam": "IPv4", 00:24:01.751 "traddr": "10.0.0.2", 00:24:01.751 "trsvcid": "4420", 00:24:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:01.751 }, 00:24:01.751 "ctrlr_data": { 00:24:01.751 "cntlid": 1, 00:24:01.751 "vendor_id": "0x8086", 00:24:01.751 "model_number": "SPDK bdev Controller", 00:24:01.751 "serial_number": "00000000000000000000", 00:24:01.751 "firmware_revision": "25.01", 00:24:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.751 "oacs": { 00:24:01.751 "security": 0, 00:24:01.751 "format": 0, 00:24:01.751 "firmware": 0, 00:24:01.751 "ns_manage": 0 00:24:01.751 }, 00:24:01.751 "multi_ctrlr": true, 00:24:01.751 "ana_reporting": false 00:24:01.751 }, 00:24:01.751 "vs": { 00:24:01.751 "nvme_version": "1.3" 00:24:01.751 }, 00:24:01.751 "ns_data": { 00:24:01.751 "id": 1, 00:24:01.751 "can_share": true 00:24:01.751 } 00:24:01.751 } 00:24:01.751 ], 00:24:01.751 "mp_policy": "active_passive" 00:24:01.751 } 00:24:01.751 } 00:24:01.751 ] 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.751 [2024-11-29 13:08:01.416177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:01.751 [2024-11-29 13:08:01.416238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1d1ee20 (9): Bad file descriptor 00:24:01.751 [2024-11-29 13:08:01.548050] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.751 [ 00:24:01.751 { 00:24:01.751 "name": "nvme0n1", 00:24:01.751 "aliases": [ 00:24:01.751 "93b93fe9-bf33-43f5-b55f-f691cc0c317e" 00:24:01.751 ], 00:24:01.751 "product_name": "NVMe disk", 00:24:01.751 "block_size": 512, 00:24:01.751 "num_blocks": 2097152, 00:24:01.751 "uuid": "93b93fe9-bf33-43f5-b55f-f691cc0c317e", 00:24:01.751 "numa_id": 1, 00:24:01.751 "assigned_rate_limits": { 00:24:01.751 "rw_ios_per_sec": 0, 00:24:01.751 "rw_mbytes_per_sec": 0, 00:24:01.751 "r_mbytes_per_sec": 0, 00:24:01.751 "w_mbytes_per_sec": 0 00:24:01.751 }, 00:24:01.751 "claimed": false, 00:24:01.751 "zoned": false, 00:24:01.751 "supported_io_types": { 00:24:01.751 "read": true, 00:24:01.751 "write": true, 00:24:01.751 "unmap": false, 00:24:01.751 "flush": true, 00:24:01.751 "reset": true, 00:24:01.751 "nvme_admin": true, 00:24:01.751 "nvme_io": true, 00:24:01.751 "nvme_io_md": false, 00:24:01.751 "write_zeroes": true, 00:24:01.751 "zcopy": false, 00:24:01.751 "get_zone_info": false, 00:24:01.751 "zone_management": false, 00:24:01.751 "zone_append": false, 00:24:01.751 "compare": true, 00:24:01.751 "compare_and_write": true, 00:24:01.751 "abort": true, 00:24:01.751 "seek_hole": false, 00:24:01.751 "seek_data": false, 00:24:01.751 "copy": true, 00:24:01.751 "nvme_iov_md": false 00:24:01.751 }, 00:24:01.751 "memory_domains": [ 
00:24:01.751 { 00:24:01.751 "dma_device_id": "system", 00:24:01.751 "dma_device_type": 1 00:24:01.751 } 00:24:01.751 ], 00:24:01.751 "driver_specific": { 00:24:01.751 "nvme": [ 00:24:01.751 { 00:24:01.751 "trid": { 00:24:01.751 "trtype": "TCP", 00:24:01.751 "adrfam": "IPv4", 00:24:01.751 "traddr": "10.0.0.2", 00:24:01.751 "trsvcid": "4420", 00:24:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:01.751 }, 00:24:01.751 "ctrlr_data": { 00:24:01.751 "cntlid": 2, 00:24:01.751 "vendor_id": "0x8086", 00:24:01.751 "model_number": "SPDK bdev Controller", 00:24:01.751 "serial_number": "00000000000000000000", 00:24:01.751 "firmware_revision": "25.01", 00:24:01.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.751 "oacs": { 00:24:01.751 "security": 0, 00:24:01.751 "format": 0, 00:24:01.751 "firmware": 0, 00:24:01.751 "ns_manage": 0 00:24:01.751 }, 00:24:01.751 "multi_ctrlr": true, 00:24:01.751 "ana_reporting": false 00:24:01.751 }, 00:24:01.751 "vs": { 00:24:01.751 "nvme_version": "1.3" 00:24:01.751 }, 00:24:01.751 "ns_data": { 00:24:01.751 "id": 1, 00:24:01.751 "can_share": true 00:24:01.751 } 00:24:01.751 } 00:24:01.751 ], 00:24:01.751 "mp_policy": "active_passive" 00:24:01.751 } 00:24:01.751 } 00:24:01.751 ] 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.751 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.H3T396PsBS 
00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.H3T396PsBS 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.H3T396PsBS 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.008 [2024-11-29 13:08:01.624794] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.008 [2024-11-29 13:08:01.624913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.008 [2024-11-29 13:08:01.644858] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.008 nvme0n1 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.008 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.008 [ 00:24:02.008 { 00:24:02.008 "name": "nvme0n1", 00:24:02.008 "aliases": [ 00:24:02.008 "93b93fe9-bf33-43f5-b55f-f691cc0c317e" 00:24:02.008 ], 00:24:02.008 "product_name": "NVMe disk", 00:24:02.008 "block_size": 512, 00:24:02.008 "num_blocks": 2097152, 00:24:02.008 "uuid": "93b93fe9-bf33-43f5-b55f-f691cc0c317e", 00:24:02.008 "numa_id": 1, 00:24:02.008 "assigned_rate_limits": { 00:24:02.008 "rw_ios_per_sec": 0, 00:24:02.008 
"rw_mbytes_per_sec": 0, 00:24:02.008 "r_mbytes_per_sec": 0, 00:24:02.008 "w_mbytes_per_sec": 0 00:24:02.008 }, 00:24:02.008 "claimed": false, 00:24:02.008 "zoned": false, 00:24:02.008 "supported_io_types": { 00:24:02.008 "read": true, 00:24:02.008 "write": true, 00:24:02.008 "unmap": false, 00:24:02.008 "flush": true, 00:24:02.008 "reset": true, 00:24:02.008 "nvme_admin": true, 00:24:02.008 "nvme_io": true, 00:24:02.008 "nvme_io_md": false, 00:24:02.008 "write_zeroes": true, 00:24:02.008 "zcopy": false, 00:24:02.008 "get_zone_info": false, 00:24:02.008 "zone_management": false, 00:24:02.008 "zone_append": false, 00:24:02.008 "compare": true, 00:24:02.008 "compare_and_write": true, 00:24:02.008 "abort": true, 00:24:02.008 "seek_hole": false, 00:24:02.008 "seek_data": false, 00:24:02.008 "copy": true, 00:24:02.008 "nvme_iov_md": false 00:24:02.008 }, 00:24:02.008 "memory_domains": [ 00:24:02.008 { 00:24:02.008 "dma_device_id": "system", 00:24:02.008 "dma_device_type": 1 00:24:02.008 } 00:24:02.008 ], 00:24:02.008 "driver_specific": { 00:24:02.008 "nvme": [ 00:24:02.008 { 00:24:02.008 "trid": { 00:24:02.008 "trtype": "TCP", 00:24:02.008 "adrfam": "IPv4", 00:24:02.008 "traddr": "10.0.0.2", 00:24:02.008 "trsvcid": "4421", 00:24:02.008 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:02.008 }, 00:24:02.008 "ctrlr_data": { 00:24:02.009 "cntlid": 3, 00:24:02.009 "vendor_id": "0x8086", 00:24:02.009 "model_number": "SPDK bdev Controller", 00:24:02.009 "serial_number": "00000000000000000000", 00:24:02.009 "firmware_revision": "25.01", 00:24:02.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.009 "oacs": { 00:24:02.009 "security": 0, 00:24:02.009 "format": 0, 00:24:02.009 "firmware": 0, 00:24:02.009 "ns_manage": 0 00:24:02.009 }, 00:24:02.009 "multi_ctrlr": true, 00:24:02.009 "ana_reporting": false 00:24:02.009 }, 00:24:02.009 "vs": { 00:24:02.009 "nvme_version": "1.3" 00:24:02.009 }, 00:24:02.009 "ns_data": { 00:24:02.009 "id": 1, 00:24:02.009 "can_share": true 00:24:02.009 } 
00:24:02.009 } 00:24:02.009 ], 00:24:02.009 "mp_policy": "active_passive" 00:24:02.009 } 00:24:02.009 } 00:24:02.009 ] 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.H3T396PsBS 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:02.009 rmmod nvme_tcp 00:24:02.009 rmmod nvme_fabrics 00:24:02.009 rmmod nvme_keyring 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:02.009 13:08:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2070872 ']' 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2070872 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2070872 ']' 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2070872 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.009 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070872 00:24:02.267 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.267 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.267 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070872' 00:24:02.267 killing process with pid 2070872 00:24:02.267 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2070872 00:24:02.267 13:08:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2070872 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:02.267 
13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.267 13:08:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:04.796 00:24:04.796 real 0m9.259s 00:24:04.796 user 0m3.046s 00:24:04.796 sys 0m4.640s 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.796 ************************************ 00:24:04.796 END TEST nvmf_async_init 00:24:04.796 ************************************ 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.796 ************************************ 00:24:04.796 START TEST dma 00:24:04.796 ************************************ 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
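The teardown above (`killprocess 2070872`) probes the pid with `kill -0`, inspects the process name via `ps`, then kills and waits. A reduced standalone sketch of that guard pattern; the function name mirrors the helper in autotest_common.sh, but this is a simplified approximation, not the full implementation:

```shell
#!/usr/bin/env bash
set -u

killprocess() {
    local pid=$1
    # Bail out quietly if the process is already gone.
    kill -0 "$pid" 2>/dev/null || return 0
    # Refuse to kill a bare 'sudo' wrapper, as the real helper does.
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    [ "$name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # 'wait' only reaps children of this shell; ignore failures otherwise.
    wait "$pid" 2>/dev/null || true
}

# Demo: start a background sleeper, then tear it down.
sleep 60 &
victim=$!
killprocess "$victim"
kill -0 "$victim" 2>/dev/null && alive=1 || alive=0
echo "alive=$alive"
```

Probing with `kill -0` first keeps the cleanup idempotent: a second `killprocess` on the same pid is a no-op instead of an error.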
00:24:04.796 * Looking for test storage... 00:24:04.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:04.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.796 --rc genhtml_branch_coverage=1 00:24:04.796 --rc genhtml_function_coverage=1 00:24:04.796 --rc genhtml_legend=1 00:24:04.796 --rc geninfo_all_blocks=1 00:24:04.796 --rc geninfo_unexecuted_blocks=1 00:24:04.796 00:24:04.796 ' 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:04.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.796 --rc genhtml_branch_coverage=1 00:24:04.796 --rc genhtml_function_coverage=1 
00:24:04.796 --rc genhtml_legend=1 00:24:04.796 --rc geninfo_all_blocks=1 00:24:04.796 --rc geninfo_unexecuted_blocks=1 00:24:04.796 00:24:04.796 ' 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:04.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.796 --rc genhtml_branch_coverage=1 00:24:04.796 --rc genhtml_function_coverage=1 00:24:04.796 --rc genhtml_legend=1 00:24:04.796 --rc geninfo_all_blocks=1 00:24:04.796 --rc geninfo_unexecuted_blocks=1 00:24:04.796 00:24:04.796 ' 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:04.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.796 --rc genhtml_branch_coverage=1 00:24:04.796 --rc genhtml_function_coverage=1 00:24:04.796 --rc genhtml_legend=1 00:24:04.796 --rc geninfo_all_blocks=1 00:24:04.796 --rc geninfo_unexecuted_blocks=1 00:24:04.796 00:24:04.796 ' 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.796 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:04.797 
13:08:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:04.797 00:24:04.797 real 0m0.199s 00:24:04.797 user 0m0.117s 00:24:04.797 sys 0m0.094s 00:24:04.797 13:08:04 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:04.797 ************************************ 00:24:04.797 END TEST dma 00:24:04.797 ************************************ 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.797 ************************************ 00:24:04.797 START TEST nvmf_identify 00:24:04.797 ************************************ 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:04.797 * Looking for test storage... 
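Both the dma and identify wrappers start by probing the installed lcov with `lt 1.15 2`, the version comparator traced from scripts/common.sh: split each version on dots, then compare component-by-component numerically. A reduced sketch of that split-and-compare idea (the real `cmp_versions` also handles `>`, `>=`, and mixed separators, which this omits):

```shell
#!/usr/bin/env bash
set -u

# Return 0 (true) when $1 < $2, comparing dot-separated components
# numerically; missing components are treated as 0 (so 1.15 < 2).
version_lt() {
    local IFS=.
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2: lcov is old, enable branch-coverage opts"
```

This is why the trace goes on to export the `--rc lcov_branch_coverage=1 ...` LCOV options: the comparison reported the tool as older than 2.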
00:24:04.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.797 --rc genhtml_branch_coverage=1 00:24:04.797 --rc genhtml_function_coverage=1 00:24:04.797 --rc genhtml_legend=1 00:24:04.797 --rc geninfo_all_blocks=1 00:24:04.797 --rc geninfo_unexecuted_blocks=1 00:24:04.797 00:24:04.797 ' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:24:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.797 --rc genhtml_branch_coverage=1 00:24:04.797 --rc genhtml_function_coverage=1 00:24:04.797 --rc genhtml_legend=1 00:24:04.797 --rc geninfo_all_blocks=1 00:24:04.797 --rc geninfo_unexecuted_blocks=1 00:24:04.797 00:24:04.797 ' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.797 --rc genhtml_branch_coverage=1 00:24:04.797 --rc genhtml_function_coverage=1 00:24:04.797 --rc genhtml_legend=1 00:24:04.797 --rc geninfo_all_blocks=1 00:24:04.797 --rc geninfo_unexecuted_blocks=1 00:24:04.797 00:24:04.797 ' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:04.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.797 --rc genhtml_branch_coverage=1 00:24:04.797 --rc genhtml_function_coverage=1 00:24:04.797 --rc genhtml_legend=1 00:24:04.797 --rc geninfo_all_blocks=1 00:24:04.797 --rc geninfo_unexecuted_blocks=1 00:24:04.797 00:24:04.797 ' 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.797 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.798 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.056 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.056 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.056 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:05.056 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:05.056 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:05.056 13:08:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.315 13:08:09 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:10.315 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.315 
13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:10.315 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.315 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:10.316 Found net devices under 0000:86:00.0: cvl_0_0 00:24:10.316 13:08:09 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:10.316 Found net devices under 0000:86:00.1: cvl_0_1 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.316 13:08:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.316 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.316 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.316 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.316 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.316 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:24:10.575 00:24:10.575 --- 10.0.0.2 ping statistics --- 00:24:10.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.575 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:24:10.575 00:24:10.575 --- 10.0.0.1 ping statistics --- 00:24:10.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.575 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2074691 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2074691 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2074691 ']' 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.575 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.575 [2024-11-29 13:08:10.257562] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:24:10.575 [2024-11-29 13:08:10.257608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.575 [2024-11-29 13:08:10.325028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.575 [2024-11-29 13:08:10.368865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.575 [2024-11-29 13:08:10.368903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.575 [2024-11-29 13:08:10.368910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.575 [2024-11-29 13:08:10.368915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.575 [2024-11-29 13:08:10.368921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:10.575 [2024-11-29 13:08:10.370375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.575 [2024-11-29 13:08:10.370471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.575 [2024-11-29 13:08:10.370533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.575 [2024-11-29 13:08:10.370535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.834 [2024-11-29 13:08:10.473473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.834 Malloc0 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.834 13:08:10 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.834 [2024-11-29 13:08:10.580739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.834 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.835 13:08:10 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.835 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:10.835 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.835 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.835 [ 00:24:10.835 { 00:24:10.835 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:10.835 "subtype": "Discovery", 00:24:10.835 "listen_addresses": [ 00:24:10.835 { 00:24:10.835 "trtype": "TCP", 00:24:10.835 "adrfam": "IPv4", 00:24:10.835 "traddr": "10.0.0.2", 00:24:10.835 "trsvcid": "4420" 00:24:10.835 } 00:24:10.835 ], 00:24:10.835 "allow_any_host": true, 00:24:10.835 "hosts": [] 00:24:10.835 }, 00:24:10.835 { 00:24:10.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.835 "subtype": "NVMe", 00:24:10.835 "listen_addresses": [ 00:24:10.835 { 00:24:10.835 "trtype": "TCP", 00:24:10.835 "adrfam": "IPv4", 00:24:10.835 "traddr": "10.0.0.2", 00:24:10.835 "trsvcid": "4420" 00:24:10.835 } 00:24:10.835 ], 00:24:10.835 "allow_any_host": true, 00:24:10.835 "hosts": [], 00:24:10.835 "serial_number": "SPDK00000000000001", 00:24:10.835 "model_number": "SPDK bdev Controller", 00:24:10.835 "max_namespaces": 32, 00:24:10.835 "min_cntlid": 1, 00:24:10.835 "max_cntlid": 65519, 00:24:10.835 "namespaces": [ 00:24:10.835 { 00:24:10.835 "nsid": 1, 00:24:10.835 "bdev_name": "Malloc0", 00:24:10.835 "name": "Malloc0", 00:24:10.835 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:10.835 "eui64": "ABCDEF0123456789", 00:24:10.835 "uuid": "5e993518-87aa-4124-bb78-8b6b614c7133" 00:24:10.835 } 00:24:10.835 ] 00:24:10.835 } 00:24:10.835 ] 00:24:10.835 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.835 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:10.835 [2024-11-29 13:08:10.632022] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:24:10.835 [2024-11-29 13:08:10.632055] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074714 ] 00:24:11.096 [2024-11-29 13:08:10.672902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:11.096 [2024-11-29 13:08:10.676954] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:11.096 [2024-11-29 13:08:10.676961] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:11.096 [2024-11-29 13:08:10.676976] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:11.096 [2024-11-29 13:08:10.676984] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:11.096 [2024-11-29 13:08:10.677544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:11.096 [2024-11-29 13:08:10.677582] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x24a2690 0 00:24:11.097 [2024-11-29 13:08:10.683960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:11.097 [2024-11-29 13:08:10.683974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:11.097 [2024-11-29 13:08:10.683979] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:11.097 [2024-11-29 13:08:10.683982] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:11.097 [2024-11-29 13:08:10.684017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.684023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.684027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.684039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:11.097 [2024-11-29 13:08:10.684057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 13:08:10.690957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.690966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.690969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.690973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.690986] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:11.097 [2024-11-29 13:08:10.690993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:11.097 [2024-11-29 13:08:10.690998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:11.097 [2024-11-29 13:08:10.691013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 
00:24:11.097 [2024-11-29 13:08:10.691027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-29 13:08:10.691040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 13:08:10.691136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.691142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.691145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.691156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:11.097 [2024-11-29 13:08:10.691163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:11.097 [2024-11-29 13:08:10.691169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.691182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-29 13:08:10.691192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 13:08:10.691259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.691264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:11.097 [2024-11-29 13:08:10.691268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.691276] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:11.097 [2024-11-29 13:08:10.691283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:11.097 [2024-11-29 13:08:10.691289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.691301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-29 13:08:10.691310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 13:08:10.691379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.691385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.691389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.691396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:11.097 [2024-11-29 13:08:10.691404] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.691417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-29 13:08:10.691426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 13:08:10.691492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.691498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.691501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.691509] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:11.097 [2024-11-29 13:08:10.691514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:11.097 [2024-11-29 13:08:10.691520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:11.097 [2024-11-29 13:08:10.691628] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:11.097 [2024-11-29 13:08:10.691633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:11.097 [2024-11-29 13:08:10.691640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.691652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-29 13:08:10.691662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 13:08:10.691727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.691732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.691735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.691743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:11.097 [2024-11-29 13:08:10.691751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.691763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-29 13:08:10.691777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 
13:08:10.691845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.691851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.691854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.691862] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:11.097 [2024-11-29 13:08:10.691866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:11.097 [2024-11-29 13:08:10.691873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:11.097 [2024-11-29 13:08:10.691884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:11.097 [2024-11-29 13:08:10.691893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.691896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.691902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-29 13:08:10.691913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 13:08:10.692018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.097 [2024-11-29 13:08:10.692025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:24:11.097 [2024-11-29 13:08:10.692028] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692032] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24a2690): datao=0, datal=4096, cccid=0 00:24:11.097 [2024-11-29 13:08:10.692036] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2504100) on tqpair(0x24a2690): expected_datao=0, payload_size=4096 00:24:11.097 [2024-11-29 13:08:10.692040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692046] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692051] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.692085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.692089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692092] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.692100] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:11.097 [2024-11-29 13:08:10.692106] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:11.097 [2024-11-29 13:08:10.692110] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:11.097 [2024-11-29 13:08:10.692116] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:11.097 [2024-11-29 13:08:10.692121] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:11.097 [2024-11-29 13:08:10.692125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:11.097 [2024-11-29 13:08:10.692133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:11.097 [2024-11-29 13:08:10.692141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.692154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:11.097 [2024-11-29 13:08:10.692164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 13:08:10.692247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.692252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.692255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.692265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.692277] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.097 [2024-11-29 13:08:10.692282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.692294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.097 [2024-11-29 13:08:10.692299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.692310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.097 [2024-11-29 13:08:10.692315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.692327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.097 [2024-11-29 13:08:10.692331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:11.097 [2024-11-29 13:08:10.692342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:11.097 [2024-11-29 13:08:10.692347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.692356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-29 13:08:10.692367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504100, cid 0, qid 0 00:24:11.097 [2024-11-29 13:08:10.692372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504280, cid 1, qid 0 00:24:11.097 [2024-11-29 13:08:10.692376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504400, cid 2, qid 0 00:24:11.097 [2024-11-29 13:08:10.692382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504580, cid 3, qid 0 00:24:11.097 [2024-11-29 13:08:10.692386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504700, cid 4, qid 0 00:24:11.097 [2024-11-29 13:08:10.692481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.692487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.692490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504700) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.692498] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:11.097 [2024-11-29 13:08:10.692502] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:24:11.097 [2024-11-29 13:08:10.692511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24a2690) 00:24:11.097 [2024-11-29 13:08:10.692521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.097 [2024-11-29 13:08:10.692530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504700, cid 4, qid 0 00:24:11.097 [2024-11-29 13:08:10.692605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.097 [2024-11-29 13:08:10.692610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.097 [2024-11-29 13:08:10.692613] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692616] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24a2690): datao=0, datal=4096, cccid=4 00:24:11.097 [2024-11-29 13:08:10.692620] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2504700) on tqpair(0x24a2690): expected_datao=0, payload_size=4096 00:24:11.097 [2024-11-29 13:08:10.692624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692644] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692648] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.097 [2024-11-29 13:08:10.692691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.097 [2024-11-29 13:08:10.692694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x2504700) on tqpair=0x24a2690 00:24:11.097 [2024-11-29 13:08:10.692708] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:11.097 [2024-11-29 13:08:10.692728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.097 [2024-11-29 13:08:10.692733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24a2690) 00:24:11.098 [2024-11-29 13:08:10.692738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-29 13:08:10.692744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.692747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.692750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24a2690) 00:24:11.098 [2024-11-29 13:08:10.692755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.098 [2024-11-29 13:08:10.692769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504700, cid 4, qid 0 00:24:11.098 [2024-11-29 13:08:10.692774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504880, cid 5, qid 0 00:24:11.098 [2024-11-29 13:08:10.692880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.098 [2024-11-29 13:08:10.692887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.098 [2024-11-29 13:08:10.692890] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.692894] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24a2690): datao=0, datal=1024, cccid=4 00:24:11.098 [2024-11-29 13:08:10.692899] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2504700) on tqpair(0x24a2690): expected_datao=0, payload_size=1024 00:24:11.098 [2024-11-29 13:08:10.692903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.692908] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.692912] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.692916] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.098 [2024-11-29 13:08:10.692921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.098 [2024-11-29 13:08:10.692924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.692927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504880) on tqpair=0x24a2690 00:24:11.098 [2024-11-29 13:08:10.737960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.098 [2024-11-29 13:08:10.737974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.098 [2024-11-29 13:08:10.737977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.737981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504700) on tqpair=0x24a2690 00:24:11.098 [2024-11-29 13:08:10.737994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.737998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24a2690) 00:24:11.098 [2024-11-29 13:08:10.738006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-29 13:08:10.738022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504700, cid 4, qid 0 00:24:11.098 [2024-11-29 13:08:10.738180] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.098 [2024-11-29 13:08:10.738186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.098 [2024-11-29 13:08:10.738189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.738193] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24a2690): datao=0, datal=3072, cccid=4 00:24:11.098 [2024-11-29 13:08:10.738197] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2504700) on tqpair(0x24a2690): expected_datao=0, payload_size=3072 00:24:11.098 [2024-11-29 13:08:10.738201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.738214] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.738218] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.779094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.098 [2024-11-29 13:08:10.779104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.098 [2024-11-29 13:08:10.779108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.779111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504700) on tqpair=0x24a2690 00:24:11.098 [2024-11-29 13:08:10.779122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.779125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24a2690) 00:24:11.098 [2024-11-29 13:08:10.779132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-29 13:08:10.779148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504700, cid 4, qid 0 00:24:11.098 [2024-11-29 
13:08:10.779242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.098 [2024-11-29 13:08:10.779251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.098 [2024-11-29 13:08:10.779254] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.779257] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24a2690): datao=0, datal=8, cccid=4 00:24:11.098 [2024-11-29 13:08:10.779262] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2504700) on tqpair(0x24a2690): expected_datao=0, payload_size=8 00:24:11.098 [2024-11-29 13:08:10.779266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.779271] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.779275] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.821108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.098 [2024-11-29 13:08:10.821118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.098 [2024-11-29 13:08:10.821121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.821125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504700) on tqpair=0x24a2690 00:24:11.098 ===================================================== 00:24:11.098 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:11.098 ===================================================== 00:24:11.098 Controller Capabilities/Features 00:24:11.098 ================================ 00:24:11.098 Vendor ID: 0000 00:24:11.098 Subsystem Vendor ID: 0000 00:24:11.098 Serial Number: .................... 00:24:11.098 Model Number: ........................................ 
00:24:11.098 Firmware Version: 25.01 00:24:11.098 Recommended Arb Burst: 0 00:24:11.098 IEEE OUI Identifier: 00 00 00 00:24:11.098 Multi-path I/O 00:24:11.098 May have multiple subsystem ports: No 00:24:11.098 May have multiple controllers: No 00:24:11.098 Associated with SR-IOV VF: No 00:24:11.098 Max Data Transfer Size: 131072 00:24:11.098 Max Number of Namespaces: 0 00:24:11.098 Max Number of I/O Queues: 1024 00:24:11.098 NVMe Specification Version (VS): 1.3 00:24:11.098 NVMe Specification Version (Identify): 1.3 00:24:11.098 Maximum Queue Entries: 128 00:24:11.098 Contiguous Queues Required: Yes 00:24:11.098 Arbitration Mechanisms Supported 00:24:11.098 Weighted Round Robin: Not Supported 00:24:11.098 Vendor Specific: Not Supported 00:24:11.098 Reset Timeout: 15000 ms 00:24:11.098 Doorbell Stride: 4 bytes 00:24:11.098 NVM Subsystem Reset: Not Supported 00:24:11.098 Command Sets Supported 00:24:11.098 NVM Command Set: Supported 00:24:11.098 Boot Partition: Not Supported 00:24:11.098 Memory Page Size Minimum: 4096 bytes 00:24:11.098 Memory Page Size Maximum: 4096 bytes 00:24:11.098 Persistent Memory Region: Not Supported 00:24:11.098 Optional Asynchronous Events Supported 00:24:11.098 Namespace Attribute Notices: Not Supported 00:24:11.098 Firmware Activation Notices: Not Supported 00:24:11.098 ANA Change Notices: Not Supported 00:24:11.098 PLE Aggregate Log Change Notices: Not Supported 00:24:11.098 LBA Status Info Alert Notices: Not Supported 00:24:11.098 EGE Aggregate Log Change Notices: Not Supported 00:24:11.098 Normal NVM Subsystem Shutdown event: Not Supported 00:24:11.098 Zone Descriptor Change Notices: Not Supported 00:24:11.098 Discovery Log Change Notices: Supported 00:24:11.098 Controller Attributes 00:24:11.098 128-bit Host Identifier: Not Supported 00:24:11.098 Non-Operational Permissive Mode: Not Supported 00:24:11.098 NVM Sets: Not Supported 00:24:11.098 Read Recovery Levels: Not Supported 00:24:11.098 Endurance Groups: Not Supported 00:24:11.098 
Predictable Latency Mode: Not Supported 00:24:11.098 Traffic Based Keep Alive: Not Supported 00:24:11.098 Namespace Granularity: Not Supported 00:24:11.098 SQ Associations: Not Supported 00:24:11.098 UUID List: Not Supported 00:24:11.098 Multi-Domain Subsystem: Not Supported 00:24:11.098 Fixed Capacity Management: Not Supported 00:24:11.098 Variable Capacity Management: Not Supported 00:24:11.098 Delete Endurance Group: Not Supported 00:24:11.098 Delete NVM Set: Not Supported 00:24:11.098 Extended LBA Formats Supported: Not Supported 00:24:11.098 Flexible Data Placement Supported: Not Supported 00:24:11.098 00:24:11.098 Controller Memory Buffer Support 00:24:11.098 ================================ 00:24:11.098 Supported: No 00:24:11.098 00:24:11.098 Persistent Memory Region Support 00:24:11.098 ================================ 00:24:11.098 Supported: No 00:24:11.098 00:24:11.098 Admin Command Set Attributes 00:24:11.098 ============================ 00:24:11.098 Security Send/Receive: Not Supported 00:24:11.098 Format NVM: Not Supported 00:24:11.098 Firmware Activate/Download: Not Supported 00:24:11.098 Namespace Management: Not Supported 00:24:11.098 Device Self-Test: Not Supported 00:24:11.098 Directives: Not Supported 00:24:11.098 NVMe-MI: Not Supported 00:24:11.098 Virtualization Management: Not Supported 00:24:11.098 Doorbell Buffer Config: Not Supported 00:24:11.098 Get LBA Status Capability: Not Supported 00:24:11.098 Command & Feature Lockdown Capability: Not Supported 00:24:11.098 Abort Command Limit: 1 00:24:11.098 Async Event Request Limit: 4 00:24:11.098 Number of Firmware Slots: N/A 00:24:11.098 Firmware Slot 1 Read-Only: N/A 00:24:11.098 Firmware Activation Without Reset: N/A 00:24:11.098 Multiple Update Detection Support: N/A 00:24:11.098 Firmware Update Granularity: No Information Provided 00:24:11.098 Per-Namespace SMART Log: No 00:24:11.098 Asymmetric Namespace Access Log Page: Not Supported 00:24:11.098 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:11.098 Command Effects Log Page: Not Supported 00:24:11.098 Get Log Page Extended Data: Supported 00:24:11.098 Telemetry Log Pages: Not Supported 00:24:11.098 Persistent Event Log Pages: Not Supported 00:24:11.098 Supported Log Pages Log Page: May Support 00:24:11.098 Commands Supported & Effects Log Page: Not Supported 00:24:11.098 Feature Identifiers & Effects Log Page: May Support 00:24:11.098 NVMe-MI Commands & Effects Log Page: May Support 00:24:11.098 Data Area 4 for Telemetry Log: Not Supported 00:24:11.098 Error Log Page Entries Supported: 128 00:24:11.098 Keep Alive: Not Supported 00:24:11.098 00:24:11.098 NVM Command Set Attributes 00:24:11.098 ========================== 00:24:11.098 Submission Queue Entry Size 00:24:11.098 Max: 1 00:24:11.098 Min: 1 00:24:11.098 Completion Queue Entry Size 00:24:11.098 Max: 1 00:24:11.098 Min: 1 00:24:11.098 Number of Namespaces: 0 00:24:11.098 Compare Command: Not Supported 00:24:11.098 Write Uncorrectable Command: Not Supported 00:24:11.098 Dataset Management Command: Not Supported 00:24:11.098 Write Zeroes Command: Not Supported 00:24:11.098 Set Features Save Field: Not Supported 00:24:11.098 Reservations: Not Supported 00:24:11.098 Timestamp: Not Supported 00:24:11.098 Copy: Not Supported 00:24:11.098 Volatile Write Cache: Not Present 00:24:11.098 Atomic Write Unit (Normal): 1 00:24:11.098 Atomic Write Unit (PFail): 1 00:24:11.098 Atomic Compare & Write Unit: 1 00:24:11.098 Fused Compare & Write: Supported 00:24:11.098 Scatter-Gather List 00:24:11.098 SGL Command Set: Supported 00:24:11.098 SGL Keyed: Supported 00:24:11.098 SGL Bit Bucket Descriptor: Not Supported 00:24:11.098 SGL Metadata Pointer: Not Supported 00:24:11.098 Oversized SGL: Not Supported 00:24:11.098 SGL Metadata Address: Not Supported 00:24:11.098 SGL Offset: Supported 00:24:11.098 Transport SGL Data Block: Not Supported 00:24:11.098 Replay Protected Memory Block: Not Supported 00:24:11.098 00:24:11.098 
Firmware Slot Information 00:24:11.098 ========================= 00:24:11.098 Active slot: 0 00:24:11.098 00:24:11.098 00:24:11.098 Error Log 00:24:11.098 ========= 00:24:11.098 00:24:11.098 Active Namespaces 00:24:11.098 ================= 00:24:11.098 Discovery Log Page 00:24:11.098 ================== 00:24:11.098 Generation Counter: 2 00:24:11.098 Number of Records: 2 00:24:11.098 Record Format: 0 00:24:11.098 00:24:11.098 Discovery Log Entry 0 00:24:11.098 ---------------------- 00:24:11.098 Transport Type: 3 (TCP) 00:24:11.098 Address Family: 1 (IPv4) 00:24:11.098 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:11.098 Entry Flags: 00:24:11.098 Duplicate Returned Information: 1 00:24:11.098 Explicit Persistent Connection Support for Discovery: 1 00:24:11.098 Transport Requirements: 00:24:11.098 Secure Channel: Not Required 00:24:11.098 Port ID: 0 (0x0000) 00:24:11.098 Controller ID: 65535 (0xffff) 00:24:11.098 Admin Max SQ Size: 128 00:24:11.098 Transport Service Identifier: 4420 00:24:11.098 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:11.098 Transport Address: 10.0.0.2 00:24:11.098 Discovery Log Entry 1 00:24:11.098 ---------------------- 00:24:11.098 Transport Type: 3 (TCP) 00:24:11.098 Address Family: 1 (IPv4) 00:24:11.098 Subsystem Type: 2 (NVM Subsystem) 00:24:11.098 Entry Flags: 00:24:11.098 Duplicate Returned Information: 0 00:24:11.098 Explicit Persistent Connection Support for Discovery: 0 00:24:11.098 Transport Requirements: 00:24:11.098 Secure Channel: Not Required 00:24:11.098 Port ID: 0 (0x0000) 00:24:11.098 Controller ID: 65535 (0xffff) 00:24:11.098 Admin Max SQ Size: 128 00:24:11.098 Transport Service Identifier: 4420 00:24:11.098 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:11.098 Transport Address: 10.0.0.2 [2024-11-29 13:08:10.821210] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:11.098 [2024-11-29 
13:08:10.821221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504100) on tqpair=0x24a2690 00:24:11.098 [2024-11-29 13:08:10.821228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.098 [2024-11-29 13:08:10.821232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504280) on tqpair=0x24a2690 00:24:11.098 [2024-11-29 13:08:10.821236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.098 [2024-11-29 13:08:10.821241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504400) on tqpair=0x24a2690 00:24:11.098 [2024-11-29 13:08:10.821245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.098 [2024-11-29 13:08:10.821249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504580) on tqpair=0x24a2690 00:24:11.098 [2024-11-29 13:08:10.821253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.098 [2024-11-29 13:08:10.821262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.821265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.821268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24a2690) 00:24:11.098 [2024-11-29 13:08:10.821275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-29 13:08:10.821290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504580, cid 3, qid 0 00:24:11.098 [2024-11-29 13:08:10.821354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.098 [2024-11-29 
13:08:10.821360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.098 [2024-11-29 13:08:10.821363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.821367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504580) on tqpair=0x24a2690 00:24:11.098 [2024-11-29 13:08:10.821373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.821377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.821380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24a2690) 00:24:11.098 [2024-11-29 13:08:10.821385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.098 [2024-11-29 13:08:10.821398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504580, cid 3, qid 0 00:24:11.098 [2024-11-29 13:08:10.821472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.098 [2024-11-29 13:08:10.821480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.098 [2024-11-29 13:08:10.821484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.098 [2024-11-29 13:08:10.821487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504580) on tqpair=0x24a2690 00:24:11.099 [2024-11-29 13:08:10.821491] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:11.099 [2024-11-29 13:08:10.821496] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:11.099 [2024-11-29 13:08:10.821504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.821507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.099 
[2024-11-29 13:08:10.821511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24a2690) 00:24:11.099 [2024-11-29 13:08:10.821516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-29 13:08:10.821525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504580, cid 3, qid 0 00:24:11.099 [2024-11-29 13:08:10.821593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.099 [2024-11-29 13:08:10.821599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.099 [2024-11-29 13:08:10.821603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.821606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504580) on tqpair=0x24a2690 00:24:11.099 [2024-11-29 13:08:10.821614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.821618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.821622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24a2690) 00:24:11.099 [2024-11-29 13:08:10.821627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-29 13:08:10.821636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504580, cid 3, qid 0 00:24:11.099 [2024-11-29 13:08:10.821700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.099 [2024-11-29 13:08:10.821705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.099 [2024-11-29 13:08:10.821708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.821711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504580) on 
tqpair=0x24a2690 00:24:11.099 [2024-11-29 13:08:10.821720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.821724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.821727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24a2690) 00:24:11.099 [2024-11-29 13:08:10.821733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-29 13:08:10.821742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504580, cid 3, qid 0 00:24:11.099 [2024-11-29 13:08:10.824953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.099 [2024-11-29 13:08:10.824961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.099 [2024-11-29 13:08:10.824964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.824967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504580) on tqpair=0x24a2690 00:24:11.099 [2024-11-29 13:08:10.824978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.824982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.824985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24a2690) 00:24:11.099 [2024-11-29 13:08:10.824991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.099 [2024-11-29 13:08:10.825004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2504580, cid 3, qid 0 00:24:11.099 [2024-11-29 13:08:10.825189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.099 [2024-11-29 13:08:10.825195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:24:11.099 [2024-11-29 13:08:10.825198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.825201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2504580) on tqpair=0x24a2690 00:24:11.099 [2024-11-29 13:08:10.825208] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 3 milliseconds 00:24:11.099 00:24:11.099 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:11.099 [2024-11-29 13:08:10.863448] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:24:11.099 [2024-11-29 13:08:10.863482] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074723 ] 00:24:11.099 [2024-11-29 13:08:10.903604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:11.099 [2024-11-29 13:08:10.903647] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:11.099 [2024-11-29 13:08:10.903652] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:11.099 [2024-11-29 13:08:10.903666] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:11.099 [2024-11-29 13:08:10.903674] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:11.099 [2024-11-29 13:08:10.907122] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:11.099 [2024-11-29 
13:08:10.907154] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xcc5690 0 00:24:11.099 [2024-11-29 13:08:10.907319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:11.099 [2024-11-29 13:08:10.907326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:11.099 [2024-11-29 13:08:10.907330] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:11.099 [2024-11-29 13:08:10.907333] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:11.099 [2024-11-29 13:08:10.907353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.907358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.099 [2024-11-29 13:08:10.907362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.099 [2024-11-29 13:08:10.907371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:11.099 [2024-11-29 13:08:10.907383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.363 [2024-11-29 13:08:10.914959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.363 [2024-11-29 13:08:10.914969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.363 [2024-11-29 13:08:10.914972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.914975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.363 [2024-11-29 13:08:10.914986] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:11.363 [2024-11-29 13:08:10.914993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:11.363 [2024-11-29 13:08:10.915000] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:11.363 [2024-11-29 13:08:10.915011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.363 [2024-11-29 13:08:10.915025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.363 [2024-11-29 13:08:10.915037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.363 [2024-11-29 13:08:10.915202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.363 [2024-11-29 13:08:10.915208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.363 [2024-11-29 13:08:10.915210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.363 [2024-11-29 13:08:10.915220] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:11.363 [2024-11-29 13:08:10.915227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:11.363 [2024-11-29 13:08:10.915233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.363 [2024-11-29 13:08:10.915245] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.363 [2024-11-29 13:08:10.915256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.363 [2024-11-29 13:08:10.915345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.363 [2024-11-29 13:08:10.915351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.363 [2024-11-29 13:08:10.915354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.363 [2024-11-29 13:08:10.915362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:11.363 [2024-11-29 13:08:10.915368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:11.363 [2024-11-29 13:08:10.915374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.363 [2024-11-29 13:08:10.915387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.363 [2024-11-29 13:08:10.915396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.363 [2024-11-29 13:08:10.915496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.363 [2024-11-29 13:08:10.915502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.363 [2024-11-29 13:08:10.915505] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.363 [2024-11-29 13:08:10.915513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:11.363 [2024-11-29 13:08:10.915521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.363 [2024-11-29 13:08:10.915530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.363 [2024-11-29 13:08:10.915536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.364 [2024-11-29 13:08:10.915545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.364 [2024-11-29 13:08:10.915648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.364 [2024-11-29 13:08:10.915653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.364 [2024-11-29 13:08:10.915656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.915660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.364 [2024-11-29 13:08:10.915663] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:11.364 [2024-11-29 13:08:10.915667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:11.364 [2024-11-29 13:08:10.915674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:11.364 [2024-11-29 13:08:10.915782] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:11.364 [2024-11-29 13:08:10.915786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:11.364 [2024-11-29 13:08:10.915793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.915796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.915799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.364 [2024-11-29 13:08:10.915805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.364 [2024-11-29 13:08:10.915814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.364 [2024-11-29 13:08:10.915881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.364 [2024-11-29 13:08:10.915887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.364 [2024-11-29 13:08:10.915890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.915893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.364 [2024-11-29 13:08:10.915897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:11.364 [2024-11-29 13:08:10.915905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.915909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.915912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.364 [2024-11-29 13:08:10.915918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.364 [2024-11-29 13:08:10.915927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.364 [2024-11-29 13:08:10.916032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.364 [2024-11-29 13:08:10.916038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.364 [2024-11-29 13:08:10.916041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.364 [2024-11-29 13:08:10.916049] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:11.364 [2024-11-29 13:08:10.916057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:11.364 [2024-11-29 13:08:10.916064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:11.364 [2024-11-29 13:08:10.916074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:11.364 [2024-11-29 13:08:10.916081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.364 [2024-11-29 13:08:10.916090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:11.364 [2024-11-29 13:08:10.916101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.364 [2024-11-29 13:08:10.916205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.364 [2024-11-29 13:08:10.916211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.364 [2024-11-29 13:08:10.916214] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916218] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc5690): datao=0, datal=4096, cccid=0 00:24:11.364 [2024-11-29 13:08:10.916222] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27100) on tqpair(0xcc5690): expected_datao=0, payload_size=4096 00:24:11.364 [2024-11-29 13:08:10.916226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916232] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916236] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.364 [2024-11-29 13:08:10.916291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.364 [2024-11-29 13:08:10.916294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.364 [2024-11-29 13:08:10.916304] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:11.364 [2024-11-29 13:08:10.916308] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:11.364 [2024-11-29 13:08:10.916312] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
CNTLID 0x0001 00:24:11.364 [2024-11-29 13:08:10.916316] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:11.364 [2024-11-29 13:08:10.916320] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:11.364 [2024-11-29 13:08:10.916324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:11.364 [2024-11-29 13:08:10.916331] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:11.364 [2024-11-29 13:08:10.916337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.364 [2024-11-29 13:08:10.916349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:11.364 [2024-11-29 13:08:10.916359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.364 [2024-11-29 13:08:10.916436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.364 [2024-11-29 13:08:10.916442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.364 [2024-11-29 13:08:10.916447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.364 [2024-11-29 13:08:10.916455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916459] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.364 [2024-11-29 13:08:10.916462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xcc5690) 00:24:11.365 [2024-11-29 13:08:10.916467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.365 [2024-11-29 13:08:10.916472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xcc5690) 00:24:11.365 [2024-11-29 13:08:10.916484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.365 [2024-11-29 13:08:10.916489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xcc5690) 00:24:11.365 [2024-11-29 13:08:10.916500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.365 [2024-11-29 13:08:10.916505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc5690) 00:24:11.365 [2024-11-29 13:08:10.916516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.365 [2024-11-29 13:08:10.916521] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:11.365 [2024-11-29 13:08:10.916531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:11.365 [2024-11-29 13:08:10.916537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc5690) 00:24:11.365 [2024-11-29 13:08:10.916545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.365 [2024-11-29 13:08:10.916556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27100, cid 0, qid 0 00:24:11.365 [2024-11-29 13:08:10.916561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27280, cid 1, qid 0 00:24:11.365 [2024-11-29 13:08:10.916565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27400, cid 2, qid 0 00:24:11.365 [2024-11-29 13:08:10.916569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27580, cid 3, qid 0 00:24:11.365 [2024-11-29 13:08:10.916573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27700, cid 4, qid 0 00:24:11.365 [2024-11-29 13:08:10.916690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.365 [2024-11-29 13:08:10.916696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.365 [2024-11-29 13:08:10.916699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27700) on tqpair=0xcc5690 00:24:11.365 [2024-11-29 13:08:10.916706] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:11.365 [2024-11-29 13:08:10.916710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:11.365 [2024-11-29 13:08:10.916721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:11.365 [2024-11-29 13:08:10.916727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:11.365 [2024-11-29 13:08:10.916732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc5690) 00:24:11.365 [2024-11-29 13:08:10.916744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:11.365 [2024-11-29 13:08:10.916754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27700, cid 4, qid 0 00:24:11.365 [2024-11-29 13:08:10.916821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.365 [2024-11-29 13:08:10.916827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.365 [2024-11-29 13:08:10.916830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27700) on tqpair=0xcc5690 00:24:11.365 [2024-11-29 13:08:10.916884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:11.365 [2024-11-29 13:08:10.916893] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:11.365 [2024-11-29 13:08:10.916900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.916903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc5690) 00:24:11.365 [2024-11-29 13:08:10.916909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.365 [2024-11-29 13:08:10.916919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27700, cid 4, qid 0 00:24:11.365 [2024-11-29 13:08:10.917008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.365 [2024-11-29 13:08:10.917015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.365 [2024-11-29 13:08:10.917018] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.917021] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc5690): datao=0, datal=4096, cccid=4 00:24:11.365 [2024-11-29 13:08:10.917025] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27700) on tqpair(0xcc5690): expected_datao=0, payload_size=4096 00:24:11.365 [2024-11-29 13:08:10.917029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.917034] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.917038] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.917049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.365 [2024-11-29 13:08:10.917055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.365 [2024-11-29 13:08:10.917057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.917061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27700) on tqpair=0xcc5690 00:24:11.365 [2024-11-29 13:08:10.917072] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:11.365 [2024-11-29 13:08:10.917080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:11.365 [2024-11-29 13:08:10.917088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:11.365 [2024-11-29 13:08:10.917094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.365 [2024-11-29 13:08:10.917099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc5690) 00:24:11.365 [2024-11-29 13:08:10.917105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.365 [2024-11-29 13:08:10.917115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27700, cid 4, qid 0 00:24:11.365 [2024-11-29 13:08:10.917210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.365 [2024-11-29 13:08:10.917216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.365 [2024-11-29 13:08:10.917219] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917222] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc5690): datao=0, datal=4096, cccid=4 00:24:11.366 [2024-11-29 13:08:10.917226] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27700) on tqpair(0xcc5690): expected_datao=0, payload_size=4096 00:24:11.366 [2024-11-29 13:08:10.917230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.366 
[2024-11-29 13:08:10.917235] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917238] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.366 [2024-11-29 13:08:10.917254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.366 [2024-11-29 13:08:10.917257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27700) on tqpair=0xcc5690 00:24:11.366 [2024-11-29 13:08:10.917270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:11.366 [2024-11-29 13:08:10.917278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:11.366 [2024-11-29 13:08:10.917285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc5690) 00:24:11.366 [2024-11-29 13:08:10.917294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.366 [2024-11-29 13:08:10.917304] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27700, cid 4, qid 0 00:24:11.366 [2024-11-29 13:08:10.917409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.366 [2024-11-29 13:08:10.917415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.366 [2024-11-29 13:08:10.917418] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917421] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc5690): datao=0, datal=4096, cccid=4 00:24:11.366 [2024-11-29 13:08:10.917425] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27700) on tqpair(0xcc5690): expected_datao=0, payload_size=4096 00:24:11.366 [2024-11-29 13:08:10.917429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917434] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917437] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.366 [2024-11-29 13:08:10.917453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.366 [2024-11-29 13:08:10.917456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27700) on tqpair=0xcc5690 00:24:11.366 [2024-11-29 13:08:10.917469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:11.366 [2024-11-29 13:08:10.917479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:11.366 [2024-11-29 13:08:10.917486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:11.366 [2024-11-29 13:08:10.917491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:11.366 [2024-11-29 13:08:10.917496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 
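The `_nvme_ctrlr_set_state` entries above trace the controller initialization state machine step by step ("identify ns" → "identify namespace id descriptors" → "set supported log pages" → … → "ready"). A minimal sketch of recovering the ordered transitions from such a trace — the line format is assumed from this log and is not a stable SPDK interface:

```python
import re

# Matches the tail of SPDK _nvme_ctrlr_set_state debug lines, e.g.
#   "... setting state to set supported log pages (timeout 30000 ms)"
#   "... setting state to ready (no timeout)"
STATE_RE = re.compile(r"setting state to (.+?) \((?:timeout (\d+) ms|no timeout)\)")

def extract_transitions(log_text):
    """Return the ordered list of (state, timeout_ms_or_None) transitions."""
    out = []
    for m in STATE_RE.finditer(log_text):
        state, timeout = m.group(1), m.group(2)
        out.append((state, int(timeout) if timeout else None))
    return out

# Two entries copied from the trace above, concatenated as they appear in the log.
sample = (
    "[2024-11-29 13:08:10.917479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: "
    "*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported "
    "log pages (timeout 30000 ms) "
    "[2024-11-29 13:08:10.917514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: "
    "*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)"
)
print(extract_transitions(sample))
# [('set supported log pages', 30000), ('ready', None)]
```

Run over the whole trace, this yields the full init sequence, which is useful when diagnosing where a controller attach stalls.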
00:24:11.366 [2024-11-29 13:08:10.917501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:11.366 [2024-11-29 13:08:10.917506] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:11.366 [2024-11-29 13:08:10.917510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:11.366 [2024-11-29 13:08:10.917514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:11.366 [2024-11-29 13:08:10.917527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc5690) 00:24:11.366 [2024-11-29 13:08:10.917537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.366 [2024-11-29 13:08:10.917542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc5690) 00:24:11.366 [2024-11-29 13:08:10.917554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.366 [2024-11-29 13:08:10.917566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27700, cid 4, qid 0 00:24:11.366 [2024-11-29 13:08:10.917570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27880, cid 5, qid 0 00:24:11.366 [2024-11-29 13:08:10.917689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:24:11.366 [2024-11-29 13:08:10.917694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.366 [2024-11-29 13:08:10.917697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27700) on tqpair=0xcc5690 00:24:11.366 [2024-11-29 13:08:10.917706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.366 [2024-11-29 13:08:10.917711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.366 [2024-11-29 13:08:10.917714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27880) on tqpair=0xcc5690 00:24:11.366 [2024-11-29 13:08:10.917725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc5690) 00:24:11.366 [2024-11-29 13:08:10.917734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.366 [2024-11-29 13:08:10.917744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27880, cid 5, qid 0 00:24:11.366 [2024-11-29 13:08:10.917839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.366 [2024-11-29 13:08:10.917845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.366 [2024-11-29 13:08:10.917848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27880) on tqpair=0xcc5690 00:24:11.366 [2024-11-29 13:08:10.917862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.366 [2024-11-29 13:08:10.917866] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc5690) 00:24:11.366 [2024-11-29 13:08:10.917871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.366 [2024-11-29 13:08:10.917881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27880, cid 5, qid 0 00:24:11.366 [2024-11-29 13:08:10.917991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.366 [2024-11-29 13:08:10.917998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.366 [2024-11-29 13:08:10.918001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27880) on tqpair=0xcc5690 00:24:11.367 [2024-11-29 13:08:10.918012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc5690) 00:24:11.367 [2024-11-29 13:08:10.918021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.367 [2024-11-29 13:08:10.918030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27880, cid 5, qid 0 00:24:11.367 [2024-11-29 13:08:10.918095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.367 [2024-11-29 13:08:10.918101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.367 [2024-11-29 13:08:10.918104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27880) on tqpair=0xcc5690 00:24:11.367 [2024-11-29 13:08:10.918122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:24:11.367 [2024-11-29 13:08:10.918126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xcc5690) 00:24:11.367 [2024-11-29 13:08:10.918131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.367 [2024-11-29 13:08:10.918138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xcc5690) 00:24:11.367 [2024-11-29 13:08:10.918146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.367 [2024-11-29 13:08:10.918152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xcc5690) 00:24:11.367 [2024-11-29 13:08:10.918161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.367 [2024-11-29 13:08:10.918167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcc5690) 00:24:11.367 [2024-11-29 13:08:10.918176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.367 [2024-11-29 13:08:10.918186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27880, cid 5, qid 0 00:24:11.367 [2024-11-29 13:08:10.918191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27700, cid 4, qid 0 
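The four GET LOG PAGE capsules just issued (cids 4-7) encode the log page ID and transfer size in cdw10; per the NVMe base specification, bits 7:0 carry the LID and bits 31:16 carry NUMDL, the 0's-based number of dwords to transfer. A small decoder over the cdw10 values seen above (LID names are the standard spec names):

```python
# Standard log page IDs from the NVMe base specification.
LID_NAMES = {
    0x01: "Error Information",
    0x02: "SMART / Health Information",
    0x03: "Firmware Slot Information",
    0x05: "Commands Supported and Effects",
}

def decode_get_log_page_cdw10(cdw10):
    """Decode GET LOG PAGE cdw10 into (log page name, transfer size in bytes)."""
    lid = cdw10 & 0xFF
    numdl = (cdw10 >> 16) & 0xFFFF   # 0's-based dword count (low 16 bits)
    nbytes = (numdl + 1) * 4         # dwords -> bytes
    return LID_NAMES.get(lid, hex(lid)), nbytes

# The cdw10 values logged above for cids 5, 4, 6 and 7.
for cdw10 in (0x07FF0001, 0x007F0002, 0x007F0003, 0x03FF0005):
    print(hex(cdw10), decode_get_log_page_cdw10(cdw10))
```

The decoded sizes (8192, 512, 512, 4096 bytes) line up with the `datal` values that `nvme_tcp_c2h_data_hdr_handle` reports below for cccid 5, 4, 6 and 7 respectively.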
00:24:11.367 [2024-11-29 13:08:10.918195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27a00, cid 6, qid 0 00:24:11.367 [2024-11-29 13:08:10.918199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27b80, cid 7, qid 0 00:24:11.367 [2024-11-29 13:08:10.918335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.367 [2024-11-29 13:08:10.918342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.367 [2024-11-29 13:08:10.918345] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918348] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc5690): datao=0, datal=8192, cccid=5 00:24:11.367 [2024-11-29 13:08:10.918352] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27880) on tqpair(0xcc5690): expected_datao=0, payload_size=8192 00:24:11.367 [2024-11-29 13:08:10.918356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918407] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918411] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.367 [2024-11-29 13:08:10.918421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.367 [2024-11-29 13:08:10.918424] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918427] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc5690): datao=0, datal=512, cccid=4 00:24:11.367 [2024-11-29 13:08:10.918430] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27700) on tqpair(0xcc5690): expected_datao=0, payload_size=512 00:24:11.367 [2024-11-29 13:08:10.918434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.367 
[2024-11-29 13:08:10.918440] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918443] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.367 [2024-11-29 13:08:10.918453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.367 [2024-11-29 13:08:10.918455] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918458] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc5690): datao=0, datal=512, cccid=6 00:24:11.367 [2024-11-29 13:08:10.918462] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27a00) on tqpair(0xcc5690): expected_datao=0, payload_size=512 00:24:11.367 [2024-11-29 13:08:10.918466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918471] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918475] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:11.367 [2024-11-29 13:08:10.918484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:11.367 [2024-11-29 13:08:10.918487] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918490] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xcc5690): datao=0, datal=4096, cccid=7 00:24:11.367 [2024-11-29 13:08:10.918494] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd27b80) on tqpair(0xcc5690): expected_datao=0, payload_size=4096 00:24:11.367 [2024-11-29 13:08:10.918497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918503] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918506] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.367 [2024-11-29 13:08:10.918518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.367 [2024-11-29 13:08:10.918521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27880) on tqpair=0xcc5690 00:24:11.367 [2024-11-29 13:08:10.918534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.367 [2024-11-29 13:08:10.918540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.367 [2024-11-29 13:08:10.918543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27700) on tqpair=0xcc5690 00:24:11.367 [2024-11-29 13:08:10.918556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.367 [2024-11-29 13:08:10.918561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.367 [2024-11-29 13:08:10.918564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.367 [2024-11-29 13:08:10.918567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27a00) on tqpair=0xcc5690 00:24:11.367 [2024-11-29 13:08:10.918573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.367 [2024-11-29 13:08:10.918578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.368 [2024-11-29 13:08:10.918581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.368 [2024-11-29 13:08:10.918584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27b80) on tqpair=0xcc5690 00:24:11.368 
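Every PDU the initiator receives is logged by `nvme_tcp_pdu_ch_handle` with its type; in the NVMe/TCP transport spec, type 5 is CapsuleResp (a completion) and type 7 is C2HData (controller-to-host data). A throwaway tally over a trace like this one — again, the line format is assumed from this log — separates data PDUs from completions and lists which requests finished:

```python
import re
from collections import Counter

# PDU type values from the NVMe/TCP transport specification.
PDU_NAMES = {5: "CapsuleResp", 7: "C2HData"}
PDU_RE = re.compile(r"nvme_tcp_pdu_ch_handle: \*DEBUG\*: pdu type =\s*(\d+)")
DONE_RE = re.compile(r"complete tcp_req\((0x[0-9a-f]+)\) on tqpair")

def summarize(log_text):
    """Tally received PDU types and collect completed request pointers."""
    pdus = Counter(PDU_NAMES.get(int(t), t) for t in PDU_RE.findall(log_text))
    completions = DONE_RE.findall(log_text)
    return pdus, completions

# Fragments copied from the trace above.
sample = (
    "nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 "
    "nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 "
    "nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: "
    "complete tcp_req(0xd27700) on tqpair=0xcc5690"
)
print(summarize(sample))
```

Mismatched counts (e.g. more capsule commands sent than CapsuleResp PDUs received) are a quick hint that a request is stuck in flight.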
=====================================================
00:24:11.368 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:11.368 =====================================================
00:24:11.368 Controller Capabilities/Features
00:24:11.368 ================================
00:24:11.368 Vendor ID: 8086
00:24:11.368 Subsystem Vendor ID: 8086
00:24:11.368 Serial Number: SPDK00000000000001
00:24:11.368 Model Number: SPDK bdev Controller
00:24:11.368 Firmware Version: 25.01
00:24:11.368 Recommended Arb Burst: 6
00:24:11.368 IEEE OUI Identifier: e4 d2 5c
00:24:11.368 Multi-path I/O
00:24:11.368 May have multiple subsystem ports: Yes
00:24:11.368 May have multiple controllers: Yes
00:24:11.368 Associated with SR-IOV VF: No
00:24:11.368 Max Data Transfer Size: 131072
00:24:11.368 Max Number of Namespaces: 32
00:24:11.368 Max Number of I/O Queues: 127
00:24:11.368 NVMe Specification Version (VS): 1.3
00:24:11.368 NVMe Specification Version (Identify): 1.3
00:24:11.368 Maximum Queue Entries: 128
00:24:11.368 Contiguous Queues Required: Yes
00:24:11.368 Arbitration Mechanisms Supported
00:24:11.368 Weighted Round Robin: Not Supported
00:24:11.368 Vendor Specific: Not Supported
00:24:11.368 Reset Timeout: 15000 ms
00:24:11.368 Doorbell Stride: 4 bytes
00:24:11.368 NVM Subsystem Reset: Not Supported
00:24:11.368 Command Sets Supported
00:24:11.368 NVM Command Set: Supported
00:24:11.368 Boot Partition: Not Supported
00:24:11.368 Memory Page Size Minimum: 4096 bytes
00:24:11.368 Memory Page Size Maximum: 4096 bytes
00:24:11.368 Persistent Memory Region: Not Supported
00:24:11.368 Optional Asynchronous Events Supported
00:24:11.368 Namespace Attribute Notices: Supported
00:24:11.368 Firmware Activation Notices: Not Supported
00:24:11.368 ANA Change Notices: Not Supported
00:24:11.368 PLE Aggregate Log Change Notices: Not Supported
00:24:11.368 LBA Status Info Alert Notices: Not Supported
00:24:11.368 EGE Aggregate Log Change Notices: Not Supported
00:24:11.368 Normal NVM Subsystem Shutdown event: Not Supported
00:24:11.368 Zone Descriptor Change Notices: Not Supported
00:24:11.368 Discovery Log Change Notices: Not Supported
00:24:11.368 Controller Attributes
00:24:11.368 128-bit Host Identifier: Supported
00:24:11.368 Non-Operational Permissive Mode: Not Supported
00:24:11.368 NVM Sets: Not Supported
00:24:11.368 Read Recovery Levels: Not Supported
00:24:11.368 Endurance Groups: Not Supported
00:24:11.368 Predictable Latency Mode: Not Supported
00:24:11.368 Traffic Based Keep ALive: Not Supported
00:24:11.368 Namespace Granularity: Not Supported
00:24:11.368 SQ Associations: Not Supported
00:24:11.368 UUID List: Not Supported
00:24:11.368 Multi-Domain Subsystem: Not Supported
00:24:11.368 Fixed Capacity Management: Not Supported
00:24:11.368 Variable Capacity Management: Not Supported
00:24:11.368 Delete Endurance Group: Not Supported
00:24:11.368 Delete NVM Set: Not Supported
00:24:11.368 Extended LBA Formats Supported: Not Supported
00:24:11.368 Flexible Data Placement Supported: Not Supported
00:24:11.368 
00:24:11.368 Controller Memory Buffer Support
00:24:11.368 ================================
00:24:11.368 Supported: No
00:24:11.368 
00:24:11.368 Persistent Memory Region Support
00:24:11.368 ================================
00:24:11.368 Supported: No
00:24:11.368 
00:24:11.368 Admin Command Set Attributes
00:24:11.368 ============================
00:24:11.368 Security Send/Receive: Not Supported
00:24:11.368 Format NVM: Not Supported
00:24:11.368 Firmware Activate/Download: Not Supported
00:24:11.368 Namespace Management: Not Supported
00:24:11.368 Device Self-Test: Not Supported
00:24:11.368 Directives: Not Supported
00:24:11.368 NVMe-MI: Not Supported
00:24:11.368 Virtualization Management: Not Supported
00:24:11.368 Doorbell Buffer Config: Not Supported
00:24:11.368 Get LBA Status Capability: Not Supported
00:24:11.368 Command & Feature Lockdown Capability: Not Supported
00:24:11.368 Abort Command Limit: 4
00:24:11.368 Async Event Request Limit: 4
00:24:11.368 Number of Firmware Slots: N/A
00:24:11.368 Firmware Slot 1 Read-Only: N/A
00:24:11.368 Firmware Activation Without Reset: N/A
00:24:11.368 Multiple Update Detection Support: N/A
00:24:11.368 Firmware Update Granularity: No Information Provided
00:24:11.368 Per-Namespace SMART Log: No
00:24:11.368 Asymmetric Namespace Access Log Page: Not Supported
00:24:11.368 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:11.368 Command Effects Log Page: Supported
00:24:11.368 Get Log Page Extended Data: Supported
00:24:11.368 Telemetry Log Pages: Not Supported
00:24:11.368 Persistent Event Log Pages: Not Supported
00:24:11.368 Supported Log Pages Log Page: May Support
00:24:11.368 Commands Supported & Effects Log Page: Not Supported
00:24:11.368 Feature Identifiers & Effects Log Page:May Support
00:24:11.368 NVMe-MI Commands & Effects Log Page: May Support
00:24:11.368 Data Area 4 for Telemetry Log: Not Supported
00:24:11.368 Error Log Page Entries Supported: 128
00:24:11.368 Keep Alive: Supported
00:24:11.368 Keep Alive Granularity: 10000 ms
00:24:11.368 
00:24:11.368 NVM Command Set Attributes
00:24:11.368 ==========================
00:24:11.368 Submission Queue Entry Size
00:24:11.368 Max: 64
00:24:11.368 Min: 64
00:24:11.368 Completion Queue Entry Size
00:24:11.368 Max: 16
00:24:11.368 Min: 16
00:24:11.369 Number of Namespaces: 32
00:24:11.369 Compare Command: Supported
00:24:11.369 Write Uncorrectable Command: Not Supported
00:24:11.369 Dataset Management Command: Supported
00:24:11.369 Write Zeroes Command: Supported
00:24:11.369 Set Features Save Field: Not Supported
00:24:11.369 Reservations: Supported
00:24:11.369 Timestamp: Not Supported
00:24:11.369 Copy: Supported
00:24:11.369 Volatile Write Cache: Present
00:24:11.369 Atomic Write Unit (Normal): 1
00:24:11.369 Atomic Write Unit (PFail): 1
00:24:11.369 Atomic Compare & Write Unit: 1
00:24:11.369 Fused Compare & Write: Supported
00:24:11.369 Scatter-Gather List
00:24:11.369 SGL Command Set: Supported
00:24:11.369 SGL Keyed: Supported
00:24:11.369 SGL Bit Bucket Descriptor: Not Supported
00:24:11.369 SGL Metadata Pointer: Not Supported
00:24:11.369 Oversized SGL: Not Supported
00:24:11.369 SGL Metadata Address: Not Supported
00:24:11.369 SGL Offset: Supported
00:24:11.369 Transport SGL Data Block: Not Supported
00:24:11.369 Replay Protected Memory Block: Not Supported
00:24:11.369 
00:24:11.369 Firmware Slot Information
00:24:11.369 =========================
00:24:11.369 Active slot: 1
00:24:11.369 Slot 1 Firmware Revision: 25.01
00:24:11.369 
00:24:11.369 
00:24:11.369 Commands Supported and Effects
00:24:11.369 ==============================
00:24:11.369 Admin Commands
00:24:11.369 --------------
00:24:11.369 Get Log Page (02h): Supported
00:24:11.369 Identify (06h): Supported
00:24:11.369 Abort (08h): Supported
00:24:11.369 Set Features (09h): Supported
00:24:11.369 Get Features (0Ah): Supported
00:24:11.369 Asynchronous Event Request (0Ch): Supported
00:24:11.369 Keep Alive (18h): Supported
00:24:11.369 I/O Commands
00:24:11.369 ------------
00:24:11.369 Flush (00h): Supported LBA-Change
00:24:11.369 Write (01h): Supported LBA-Change
00:24:11.369 Read (02h): Supported
00:24:11.369 Compare (05h): Supported
00:24:11.369 Write Zeroes (08h): Supported LBA-Change
00:24:11.369 Dataset Management (09h): Supported LBA-Change
00:24:11.369 Copy (19h): Supported LBA-Change
00:24:11.369 
00:24:11.369 Error Log
00:24:11.369 =========
00:24:11.369 
00:24:11.369 Arbitration
00:24:11.369 ===========
00:24:11.369 Arbitration Burst: 1
00:24:11.369 
00:24:11.369 Power Management
00:24:11.369 ================
00:24:11.369 Number of Power States: 1
00:24:11.369 Current Power State: Power State #0
00:24:11.369 Power State #0:
00:24:11.369 Max Power: 0.00 W
00:24:11.369 Non-Operational State: Operational
00:24:11.369 Entry Latency: Not Reported
00:24:11.369 Exit Latency: Not Reported
00:24:11.369 Relative Read Throughput: 0 00:24:11.369
Relative Read Latency: 0 00:24:11.369 Relative Write Throughput: 0 00:24:11.369 Relative Write Latency: 0 00:24:11.369 Idle Power: Not Reported 00:24:11.369 Active Power: Not Reported 00:24:11.369 Non-Operational Permissive Mode: Not Supported 00:24:11.369 00:24:11.369 Health Information 00:24:11.369 ================== 00:24:11.369 Critical Warnings: 00:24:11.369 Available Spare Space: OK 00:24:11.369 Temperature: OK 00:24:11.369 Device Reliability: OK 00:24:11.369 Read Only: No 00:24:11.369 Volatile Memory Backup: OK 00:24:11.369 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:11.369 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:11.369 Available Spare: 0% 00:24:11.369 Available Spare Threshold: 0% 00:24:11.369 Life Percentage Used:[2024-11-29 13:08:10.918663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.369 [2024-11-29 13:08:10.918668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xcc5690) 00:24:11.369 [2024-11-29 13:08:10.918673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.369 [2024-11-29 13:08:10.918684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27b80, cid 7, qid 0 00:24:11.369 [2024-11-29 13:08:10.918763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.369 [2024-11-29 13:08:10.918769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.369 [2024-11-29 13:08:10.918772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.369 [2024-11-29 13:08:10.918776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27b80) on tqpair=0xcc5690 00:24:11.369 [2024-11-29 13:08:10.918804] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:11.369 [2024-11-29 13:08:10.918814] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xd27100) on tqpair=0xcc5690 00:24:11.369 [2024-11-29 13:08:10.918820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.369 [2024-11-29 13:08:10.918825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27280) on tqpair=0xcc5690 00:24:11.369 [2024-11-29 13:08:10.918829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.369 [2024-11-29 13:08:10.918833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27400) on tqpair=0xcc5690 00:24:11.369 [2024-11-29 13:08:10.918837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.369 [2024-11-29 13:08:10.918841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27580) on tqpair=0xcc5690 00:24:11.369 [2024-11-29 13:08:10.918846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.369 [2024-11-29 13:08:10.918852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.369 [2024-11-29 13:08:10.918855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.369 [2024-11-29 13:08:10.918859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc5690) 00:24:11.369 [2024-11-29 13:08:10.918865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.369 [2024-11-29 13:08:10.918875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27580, cid 3, qid 0 00:24:11.370 [2024-11-29 13:08:10.922955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.370 [2024-11-29 13:08:10.922963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:11.370 [2024-11-29 13:08:10.922966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.370 [2024-11-29 13:08:10.922969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27580) on tqpair=0xcc5690 00:24:11.370 [2024-11-29 13:08:10.922978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.370 [2024-11-29 13:08:10.922982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.370 [2024-11-29 13:08:10.922985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc5690) 00:24:11.370 [2024-11-29 13:08:10.922990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.370 [2024-11-29 13:08:10.923005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27580, cid 3, qid 0 00:24:11.370 [2024-11-29 13:08:10.923210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:11.370 [2024-11-29 13:08:10.923216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:11.370 [2024-11-29 13:08:10.923219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:11.370 [2024-11-29 13:08:10.923222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27580) on tqpair=0xcc5690 00:24:11.370 [2024-11-29 13:08:10.923226] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:11.370 [2024-11-29 13:08:10.923231] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:11.370 [2024-11-29 13:08:10.923239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:11.370 [2024-11-29 13:08:10.923242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:11.370 [2024-11-29 13:08:10.923245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xcc5690)
00:24:11.370 [2024-11-29 13:08:10.923251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.370 [2024-11-29 13:08:10.923260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd27580, cid 3, qid 0
00:24:11.370 [2024-11-29 13:08:10.923361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:11.370 [2024-11-29 13:08:10.923367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:11.370 [2024-11-29 13:08:10.923370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:11.370 [2024-11-29 13:08:10.923373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27580) on tqpair=0xcc5690
00:24:11.370 [2024-11-29 13:08:10.923381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:11.370 [2024-11-29 13:08:10.923385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:11.370 [2024-11-29 13:08:10.923388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xcc5690)
[... the FABRIC PROPERTY GET / capsule response debug cycle above repeats verbatim (timestamps 13:08:10.923394 through 13:08:10.930991) while the controller shutdown poll spins ...]
00:24:11.373 [2024-11-29 13:08:10.931178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:11.373 [2024-11-29 13:08:10.931184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:11.373 [2024-11-29 13:08:10.931187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:11.373 [2024-11-29 13:08:10.931190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd27580) on tqpair=0xcc5690
00:24:11.373 [2024-11-29 13:08:10.931196] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:24:11.373 0%
00:24:11.373 Data Units Read: 0
00:24:11.373 Data Units Written: 0
00:24:11.373 Host Read Commands: 0
00:24:11.373 Host Write Commands: 0
00:24:11.373 Controller Busy Time: 0 minutes
00:24:11.373 Power Cycles: 0
00:24:11.373 Power On Hours: 0 hours
00:24:11.373 Unsafe Shutdowns: 0
00:24:11.373 Unrecoverable Media Errors: 0
00:24:11.374 Lifetime Error Log Entries: 0
00:24:11.374 Warning Temperature Time: 0 minutes
00:24:11.374 Critical Temperature Time: 0 minutes
00:24:11.374
00:24:11.374 Number of Queues
00:24:11.374 ================
00:24:11.374 Number of I/O Submission Queues: 127
00:24:11.374 Number of I/O Completion Queues: 127
00:24:11.374
00:24:11.374 Active Namespaces
00:24:11.374 =================
00:24:11.374 Namespace ID:1
00:24:11.374 Error Recovery Timeout: Unlimited
00:24:11.374 Command Set Identifier: NVM (00h)
00:24:11.374 Deallocate: Supported
00:24:11.374 Deallocated/Unwritten Error: Not Supported
00:24:11.374 Deallocated Read Value: Unknown
00:24:11.374 Deallocate in Write Zeroes: Not Supported
00:24:11.374 Deallocated Guard Field: 0xFFFF
00:24:11.374 Flush: Supported
00:24:11.374 Reservation: Supported
00:24:11.374 Namespace Sharing Capabilities: Multiple Controllers
00:24:11.374 Size (in LBAs): 131072 (0GiB)
00:24:11.374 Capacity (in LBAs): 131072 (0GiB)
00:24:11.374 Utilization (in LBAs): 131072 (0GiB)
00:24:11.374 NGUID: ABCDEF0123456789ABCDEF0123456789
00:24:11.374 EUI64: ABCDEF0123456789
00:24:11.374 UUID: 5e993518-87aa-4124-bb78-8b6b614c7133
00:24:11.374 Thin Provisioning: Not Supported
00:24:11.374 Per-NS Atomic Units: Yes
00:24:11.374 Atomic Boundary Size (Normal): 0
00:24:11.374 Atomic Boundary Size (PFail): 0
00:24:11.374 Atomic Boundary Offset: 0
00:24:11.374 Maximum Single Source Range Length: 65535
00:24:11.374 Maximum Copy Length: 65535
00:24:11.374 Maximum Source Range Count: 1
00:24:11.374 NGUID/EUI64 Never Reused: No
00:24:11.374 Namespace Write Protected: No
00:24:11.374 Number of LBA Formats: 1
00:24:11.374 Current LBA Format: LBA Format #00
00:24:11.374 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:11.374
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:11.374 13:08:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2074691 ']'
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2074691
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2074691 ']'
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2074691
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2074691
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2074691'
killing process with pid 2074691
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2074691
00:24:11.374 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2074691
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:11.634 13:08:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:13.551 13:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:13.551
00:24:13.551 real 0m8.913s
00:24:13.551 user 0m5.184s
00:24:13.551 sys 0m4.610s
00:24:13.551 13:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:13.551 13:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:13.551 ************************************
00:24:13.551 END TEST nvmf_identify
00:24:13.551 ************************************
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.813 ************************************
00:24:13.813 START TEST nvmf_perf
00:24:13.813 ************************************
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:13.813 * Looking for test storage...
00:24:13.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:13.813 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:13.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.814 --rc genhtml_branch_coverage=1 00:24:13.814 --rc genhtml_function_coverage=1 00:24:13.814 --rc genhtml_legend=1 00:24:13.814 --rc geninfo_all_blocks=1 00:24:13.814 --rc geninfo_unexecuted_blocks=1 00:24:13.814 00:24:13.814 ' 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:13.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:13.814 --rc genhtml_branch_coverage=1 00:24:13.814 --rc genhtml_function_coverage=1 00:24:13.814 --rc genhtml_legend=1 00:24:13.814 --rc geninfo_all_blocks=1 00:24:13.814 --rc geninfo_unexecuted_blocks=1 00:24:13.814 00:24:13.814 ' 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:13.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.814 --rc genhtml_branch_coverage=1 00:24:13.814 --rc genhtml_function_coverage=1 00:24:13.814 --rc genhtml_legend=1 00:24:13.814 --rc geninfo_all_blocks=1 00:24:13.814 --rc geninfo_unexecuted_blocks=1 00:24:13.814 00:24:13.814 ' 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:13.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.814 --rc genhtml_branch_coverage=1 00:24:13.814 --rc genhtml_function_coverage=1 00:24:13.814 --rc genhtml_legend=1 00:24:13.814 --rc geninfo_all_blocks=1 00:24:13.814 --rc geninfo_unexecuted_blocks=1 00:24:13.814 00:24:13.814 ' 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.814 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:13.815 13:08:13 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:13.815 13:08:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:19.075 13:08:18 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:19.075 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.076 
13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:19.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:19.076 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:19.076 Found net devices under 0000:86:00.0: cvl_0_0 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.076 13:08:18 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:19.076 Found net devices under 0000:86:00.1: cvl_0_1 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:24:19.076 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:19.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:24:19.076 00:24:19.076 --- 10.0.0.2 ping statistics --- 00:24:19.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.077 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:24:19.077 00:24:19.077 --- 10.0.0.1 ping statistics --- 00:24:19.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.077 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2078231 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2078231 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2078231 ']' 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.077 [2024-11-29 13:08:18.677942] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:24:19.077 [2024-11-29 13:08:18.677999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.077 [2024-11-29 13:08:18.744403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.077 [2024-11-29 13:08:18.787461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.077 [2024-11-29 13:08:18.787497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.077 [2024-11-29 13:08:18.787505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.077 [2024-11-29 13:08:18.787511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.077 [2024-11-29 13:08:18.787515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:19.077 [2024-11-29 13:08:18.789038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.077 [2024-11-29 13:08:18.789134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.077 [2024-11-29 13:08:18.789228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.077 [2024-11-29 13:08:18.789229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.077 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.334 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.334 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:19.334 13:08:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:22.619 13:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:22.619 13:08:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:22.619 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:24:22.619 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:22.619 13:08:22 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:22.619 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:24:22.619 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:22.619 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:22.619 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:22.877 [2024-11-29 13:08:22.556909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.877 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.135 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:23.135 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.392 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:23.392 13:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:23.392 13:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.650 [2024-11-29 13:08:23.355910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.650 13:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420
00:24:23.908 13:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:24:23.908 13:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:24:23.908 13:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:23.908 13:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:24:25.283 Initializing NVMe Controllers
00:24:25.283 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:24:25.283 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:24:25.283 Initialization complete. Launching workers.
00:24:25.283 ========================================================
00:24:25.283 Latency(us)
00:24:25.283 Device Information : IOPS MiB/s Average min max
00:24:25.283 PCIE (0000:5e:00.0) NSID 1 from core 0: 97298.50 380.07 328.58 35.36 8256.39
00:24:25.283 ========================================================
00:24:25.283 Total : 97298.50 380.07 328.58 35.36 8256.39
00:24:25.283
00:24:25.283 13:08:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:26.658 Initializing NVMe Controllers
00:24:26.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:26.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:26.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:26.658 Initialization complete. Launching workers.
00:24:26.658 ========================================================
00:24:26.658 Latency(us)
00:24:26.658 Device Information : IOPS MiB/s Average min max
00:24:26.658 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 102.00 0.40 10097.93 113.79 45587.73
00:24:26.658 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19680.83 7182.58 47902.93
00:24:26.658 ========================================================
00:24:26.658 Total : 153.00 0.60 13292.23 113.79 47902.93
00:24:26.658
00:24:26.658 13:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:28.033 Initializing NVMe Controllers
00:24:28.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:28.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:28.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:28.033 Initialization complete. Launching workers.
00:24:28.033 ========================================================
00:24:28.033 Latency(us)
00:24:28.033 Device Information : IOPS MiB/s Average min max
00:24:28.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10715.22 41.86 2986.06 455.43 6242.37
00:24:28.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3808.57 14.88 8412.65 6296.89 15927.92
00:24:28.033 ========================================================
00:24:28.033 Total : 14523.79 56.73 4409.07 455.43 15927.92
00:24:28.033
00:24:28.033 13:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:28.033 13:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:28.033 13:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:30.566 Initializing NVMe Controllers
00:24:30.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:30.566 Controller IO queue size 128, less than required.
00:24:30.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:30.566 Controller IO queue size 128, less than required.
00:24:30.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:30.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:30.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:30.566 Initialization complete. Launching workers.
00:24:30.566 ========================================================
00:24:30.566 Latency(us)
00:24:30.566 Device Information : IOPS MiB/s Average min max
00:24:30.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1740.83 435.21 74588.57 40201.75 112796.90
00:24:30.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.73 153.68 220388.35 88219.77 302659.72
00:24:30.566 ========================================================
00:24:30.566 Total : 2355.56 588.89 112638.08 40201.75 302659.72
00:24:30.566
00:24:30.566 13:08:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:30.825 No valid NVMe controllers or AIO or URING devices found
00:24:30.825 Initializing NVMe Controllers
00:24:30.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:30.825 Controller IO queue size 128, less than required.
00:24:30.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:30.825 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:30.825 Controller IO queue size 128, less than required.
00:24:30.825 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:30.825 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512.
Removing this ns from test
00:24:30.825 WARNING: Some requested NVMe devices were skipped
00:24:30.825 13:08:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:33.354 Initializing NVMe Controllers
00:24:33.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:33.354 Controller IO queue size 128, less than required.
00:24:33.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:33.354 Controller IO queue size 128, less than required.
00:24:33.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:33.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:33.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:33.354 Initialization complete. Launching workers.
00:24:33.354
00:24:33.354 ====================
00:24:33.354 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:33.354 TCP transport:
00:24:33.354 polls: 11327
00:24:33.354 idle_polls: 8131
00:24:33.354 sock_completions: 3196
00:24:33.354 nvme_completions: 6105
00:24:33.354 submitted_requests: 9280
00:24:33.354 queued_requests: 1
00:24:33.354
00:24:33.354 ====================
00:24:33.354 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:33.354 TCP transport:
00:24:33.354 polls: 11857
00:24:33.355 idle_polls: 7679
00:24:33.355 sock_completions: 4178
00:24:33.355 nvme_completions: 6515
00:24:33.355 submitted_requests: 9810
00:24:33.355 queued_requests: 1
00:24:33.355 ========================================================
00:24:33.355 Latency(us)
00:24:33.355 Device Information : IOPS MiB/s Average min max
00:24:33.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1525.76 381.44 85907.89 52586.60 150050.16
00:24:33.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1628.25 407.06 79921.82 46302.55 143804.02
00:24:33.355 ========================================================
00:24:33.355 Total : 3154.01 788.50 82817.60 46302.55 150050.16
00:24:33.355
00:24:33.355 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:33.355 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf
-- nvmf/common.sh@121 -- # sync 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.613 rmmod nvme_tcp 00:24:33.613 rmmod nvme_fabrics 00:24:33.613 rmmod nvme_keyring 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2078231 ']' 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2078231 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2078231 ']' 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2078231 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.613 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2078231 00:24:33.871 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.871 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.871 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2078231' 00:24:33.871 killing process with pid 2078231 00:24:33.871 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2078231 00:24:33.871 13:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2078231 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.245 13:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:37.778 00:24:37.778 real 0m23.597s 00:24:37.778 user 1m3.641s 00:24:37.778 sys 0m7.696s 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.778 ************************************ 00:24:37.778 END TEST nvmf_perf 00:24:37.778 ************************************ 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.778 ************************************ 00:24:37.778 START TEST nvmf_fio_host 00:24:37.778 ************************************ 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:37.778 * Looking for test storage... 00:24:37.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.778 13:08:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.778 13:08:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.778 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:37.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.778 --rc genhtml_branch_coverage=1 00:24:37.779 --rc genhtml_function_coverage=1 00:24:37.779 --rc genhtml_legend=1 00:24:37.779 --rc geninfo_all_blocks=1 00:24:37.779 --rc geninfo_unexecuted_blocks=1 00:24:37.779 00:24:37.779 ' 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.779 --rc genhtml_branch_coverage=1 00:24:37.779 --rc genhtml_function_coverage=1 00:24:37.779 --rc genhtml_legend=1 00:24:37.779 --rc geninfo_all_blocks=1 00:24:37.779 --rc geninfo_unexecuted_blocks=1 00:24:37.779 00:24:37.779 ' 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.779 --rc genhtml_branch_coverage=1 00:24:37.779 --rc genhtml_function_coverage=1 00:24:37.779 --rc genhtml_legend=1 00:24:37.779 --rc geninfo_all_blocks=1 00:24:37.779 --rc geninfo_unexecuted_blocks=1 00:24:37.779 00:24:37.779 ' 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:37.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.779 --rc genhtml_branch_coverage=1 00:24:37.779 --rc genhtml_function_coverage=1 00:24:37.779 --rc genhtml_legend=1 00:24:37.779 --rc geninfo_all_blocks=1 00:24:37.779 --rc geninfo_unexecuted_blocks=1 00:24:37.779 00:24:37.779 ' 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.779 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.780 13:08:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.780 13:08:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.042 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.042 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.042 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.042 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.042 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.042 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:24:43.043 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:43.043 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.043 13:08:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:43.043 Found net devices under 0000:86:00.0: cvl_0_0 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:43.043 Found net devices under 0000:86:00.1: cvl_0_1 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.043 13:08:42 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.043 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:24:43.301 00:24:43.301 --- 10.0.0.2 ping statistics --- 00:24:43.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.301 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:24:43.301 00:24:43.301 --- 10.0.0.1 ping statistics --- 00:24:43.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.301 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.301 13:08:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2084338 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2084338 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2084338 ']' 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.301 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.301 [2024-11-29 13:08:43.070463] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:24:43.301 [2024-11-29 13:08:43.070506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.561 [2024-11-29 13:08:43.136475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.561 [2024-11-29 13:08:43.180452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.561 [2024-11-29 13:08:43.180490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:43.561 [2024-11-29 13:08:43.180498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.561 [2024-11-29 13:08:43.180505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.561 [2024-11-29 13:08:43.180511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.561 [2024-11-29 13:08:43.182140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.561 [2024-11-29 13:08:43.182239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.561 [2024-11-29 13:08:43.182256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.561 [2024-11-29 13:08:43.182257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.561 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.561 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:43.561 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:43.821 [2024-11-29 13:08:43.457431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.821 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:43.821 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.821 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.821 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:44.078 Malloc1 00:24:44.078 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.336 13:08:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:44.595 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.595 [2024-11-29 13:08:44.337940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.595 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:44.853 13:08:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:44.853 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:44.854 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:44.854 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:44.854 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:44.854 13:08:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:45.112 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:45.112 fio-3.35 00:24:45.112 Starting 1 thread 00:24:47.645 00:24:47.645 test: (groupid=0, jobs=1): err= 0: pid=2084730: Fri Nov 29 13:08:47 2024 00:24:47.645 read: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(90.4MiB/2005msec) 00:24:47.645 slat (nsec): min=1579, max=255879, avg=1731.76, stdev=2313.54 00:24:47.645 clat (usec): min=3253, max=10530, avg=6133.44, stdev=477.77 00:24:47.645 lat (usec): min=3284, max=10531, avg=6135.17, stdev=477.72 00:24:47.645 clat percentiles (usec): 00:24:47.645 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:24:47.645 | 30.00th=[ 5866], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:24:47.645 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:24:47.645 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 9372], 00:24:47.645 | 99.99th=[10028] 00:24:47.645 bw ( KiB/s): min=45384, max=46824, per=99.97%, avg=46154.00, stdev=619.35, samples=4 00:24:47.645 iops : min=11346, max=11706, avg=11538.50, stdev=154.84, samples=4 00:24:47.645 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(89.8MiB/2005msec); 0 zone resets 00:24:47.645 slat (nsec): min=1615, max=227842, avg=1785.91, stdev=1666.93 00:24:47.645 clat (usec): min=2469, max=9377, avg=4959.24, stdev=395.21 00:24:47.645 lat (usec): min=2484, max=9378, avg=4961.03, stdev=395.23 00:24:47.645 clat percentiles (usec): 00:24:47.645 | 1.00th=[ 4047], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4686], 00:24:47.645 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5080], 
00:24:47.645 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 00:24:47.645 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 7832], 99.95th=[ 8848], 00:24:47.645 | 99.99th=[ 9372] 00:24:47.645 bw ( KiB/s): min=45520, max=46264, per=99.98%, avg=45850.00, stdev=315.86, samples=4 00:24:47.645 iops : min=11380, max=11566, avg=11462.50, stdev=78.97, samples=4 00:24:47.645 lat (msec) : 4=0.41%, 10=99.59%, 20=0.01% 00:24:47.645 cpu : usr=74.30%, sys=24.45%, ctx=41, majf=0, minf=2 00:24:47.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:47.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:47.645 issued rwts: total=23142,22987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:47.645 00:24:47.645 Run status group 0 (all jobs): 00:24:47.645 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.4MiB (94.8MB), run=2005-2005msec 00:24:47.645 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=89.8MiB (94.2MB), run=2005-2005msec 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:47.645 13:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:47.903 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:47.903 fio-3.35 00:24:47.903 Starting 1 thread 00:24:50.432 00:24:50.432 test: (groupid=0, jobs=1): err= 0: pid=2085285: Fri Nov 29 13:08:49 2024 00:24:50.432 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(340MiB/2006msec) 00:24:50.432 slat (usec): min=2, max=102, avg= 2.86, stdev= 1.33 00:24:50.432 clat (usec): min=1679, max=14703, avg=6898.24, stdev=1533.18 00:24:50.432 lat (usec): min=1681, max=14706, avg=6901.10, stdev=1533.30 00:24:50.432 clat percentiles (usec): 00:24:50.432 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5538], 00:24:50.432 | 30.00th=[ 5997], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7439], 00:24:50.432 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 8848], 95.00th=[ 9372], 00:24:50.432 | 99.00th=[10683], 99.50th=[11076], 99.90th=[12387], 99.95th=[12518], 00:24:50.432 | 99.99th=[14615] 00:24:50.432 bw ( KiB/s): min=79616, max=95872, per=50.21%, avg=87032.00, stdev=6821.25, samples=4 00:24:50.432 iops : min= 4976, max= 5992, avg=5439.50, stdev=426.33, samples=4 00:24:50.432 write: IOPS=6487, BW=101MiB/s (106MB/s)(178MiB/1751msec); 0 zone resets 00:24:50.432 slat (usec): min=29, max=390, avg=31.95, stdev= 7.31 00:24:50.432 clat (usec): min=2158, max=14968, avg=8750.43, stdev=1522.11 00:24:50.432 lat (usec): min=2188, max=14998, avg=8782.39, stdev=1523.43 00:24:50.432 clat percentiles (usec): 00:24:50.432 | 1.00th=[ 5932], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 
7570], 00:24:50.432 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8848], 00:24:50.432 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[11600], 00:24:50.432 | 99.00th=[12780], 99.50th=[13829], 99.90th=[14484], 99.95th=[14615], 00:24:50.432 | 99.99th=[14877] 00:24:50.432 bw ( KiB/s): min=81792, max=99712, per=87.07%, avg=90384.00, stdev=7329.30, samples=4 00:24:50.432 iops : min= 5112, max= 6232, avg=5649.00, stdev=458.08, samples=4 00:24:50.432 lat (msec) : 2=0.03%, 4=1.39%, 10=90.64%, 20=7.94% 00:24:50.432 cpu : usr=85.39%, sys=13.82%, ctx=24, majf=0, minf=2 00:24:50.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:50.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:50.432 issued rwts: total=21732,11360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:50.432 00:24:50.432 Run status group 0 (all jobs): 00:24:50.432 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=340MiB (356MB), run=2006-2006msec 00:24:50.432 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=178MiB (186MB), run=1751-1751msec 00:24:50.432 13:08:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.432 
13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.432 rmmod nvme_tcp 00:24:50.432 rmmod nvme_fabrics 00:24:50.432 rmmod nvme_keyring 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2084338 ']' 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2084338 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2084338 ']' 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2084338 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2084338 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2084338' 00:24:50.432 
killing process with pid 2084338 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2084338 00:24:50.432 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2084338 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.690 13:08:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.222 13:08:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.222 00:24:53.222 real 0m15.439s 00:24:53.222 user 0m46.047s 00:24:53.222 sys 0m6.199s 00:24:53.222 13:08:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.223 ************************************ 00:24:53.223 END 
TEST nvmf_fio_host 00:24:53.223 ************************************ 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.223 ************************************ 00:24:53.223 START TEST nvmf_failover 00:24:53.223 ************************************ 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:53.223 * Looking for test storage... 00:24:53.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.223 --rc genhtml_branch_coverage=1 00:24:53.223 --rc genhtml_function_coverage=1 00:24:53.223 --rc genhtml_legend=1 00:24:53.223 --rc geninfo_all_blocks=1 00:24:53.223 --rc geninfo_unexecuted_blocks=1 00:24:53.223 00:24:53.223 ' 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.223 --rc genhtml_branch_coverage=1 00:24:53.223 --rc genhtml_function_coverage=1 00:24:53.223 --rc genhtml_legend=1 00:24:53.223 --rc geninfo_all_blocks=1 00:24:53.223 --rc geninfo_unexecuted_blocks=1 00:24:53.223 00:24:53.223 ' 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.223 --rc genhtml_branch_coverage=1 00:24:53.223 --rc genhtml_function_coverage=1 00:24:53.223 --rc genhtml_legend=1 00:24:53.223 --rc geninfo_all_blocks=1 00:24:53.223 --rc geninfo_unexecuted_blocks=1 00:24:53.223 00:24:53.223 ' 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:53.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.223 --rc genhtml_branch_coverage=1 00:24:53.223 --rc genhtml_function_coverage=1 00:24:53.223 --rc genhtml_legend=1 00:24:53.223 --rc geninfo_all_blocks=1 
00:24:53.223 --rc geninfo_unexecuted_blocks=1 00:24:53.223 00:24:53.223 ' 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.223 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.224 13:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.487 13:08:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:58.487 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:58.487 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:58.487 Found net devices under 0000:86:00.0: cvl_0_0 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:58.487 Found net devices under 0000:86:00.1: cvl_0_1 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.487 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:58.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:24:58.488 00:24:58.488 --- 10.0.0.2 ping statistics --- 00:24:58.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.488 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:24:58.488 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:24:58.746 00:24:58.746 --- 10.0.0.1 ping statistics --- 00:24:58.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.746 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2089252 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2089252 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2089252 ']' 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.746 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.746 [2024-11-29 13:08:58.407603] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:24:58.746 [2024-11-29 13:08:58.407654] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.746 [2024-11-29 13:08:58.475678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:58.746 [2024-11-29 13:08:58.517630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.746 [2024-11-29 13:08:58.517670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.746 [2024-11-29 13:08:58.517677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.746 [2024-11-29 13:08:58.517682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:58.746 [2024-11-29 13:08:58.517688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.746 [2024-11-29 13:08:58.519059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.746 [2024-11-29 13:08:58.519084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.746 [2024-11-29 13:08:58.519087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.006 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.006 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:59.006 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:59.006 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:59.006 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.006 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.006 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:59.006 [2024-11-29 13:08:58.821710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.264 13:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:59.264 Malloc0 00:24:59.264 13:08:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:59.522 13:08:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.781 13:08:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.039 [2024-11-29 13:08:59.650021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.039 13:08:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:00.039 [2024-11-29 13:08:59.854591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:00.298 13:08:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:00.298 [2024-11-29 13:09:00.055247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2089525 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2089525 /var/tmp/bdevperf.sock 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2089525 ']' 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.298 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.557 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.557 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:00.557 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:01.124 NVMe0n1 00:25:01.124 13:09:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:01.382 00:25:01.382 13:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:01.382 13:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2089727 00:25:01.382 13:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:02.318 13:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.578 [2024-11-29 13:09:02.255119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde92d0 is same with the state(6) to be set 00:25:02.579 13:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:05.867 13:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:05.867 00:25:05.867 13:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:06.125 13:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:09.412 13:09:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:09.412 [2024-11-29 13:09:09.058567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.412 13:09:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:10.347 13:09:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:10.605 [2024-11-29 13:09:10.287791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeace0 is same with the state(6) to be set 00:25:10.605 [2024-11-29 13:09:10.287892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xdeace0 is same with the state(6) to be set 00:25:10.605 [2024-11-29 13:09:10.287892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeace0 is same with the state(6) to be set 00:25:10.605 13:09:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2089727 00:25:17.313 { 00:25:17.313 "results": [ 00:25:17.313 { 00:25:17.313 "job": "NVMe0n1", 00:25:17.313 "core_mask": "0x1", 00:25:17.313 "workload": "verify", 00:25:17.313 "status": "finished", 00:25:17.313 "verify_range": { 00:25:17.313 "start": 0, 00:25:17.313 "length": 16384 00:25:17.313 }, 00:25:17.313 "queue_depth": 128, 00:25:17.313 "io_size": 4096, 00:25:17.313 "runtime": 15.008564, 00:25:17.313 "iops": 10637.993081816488, 00:25:17.313 "mibps": 41.55466047584566, 00:25:17.313 "io_failed": 12021, 00:25:17.313 "io_timeout": 0, 00:25:17.313 "avg_latency_us": 11166.494300129209, 00:25:17.313 "min_latency_us": 439.8747826086956, 00:25:17.313 "max_latency_us": 21769.34956521739 00:25:17.313 } 00:25:17.313 ], 00:25:17.313 "core_count": 1 00:25:17.313 } 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2089525 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2089525 ']' 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2089525 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2089525 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2089525' 00:25:17.313 killing process with pid 2089525 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2089525 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2089525 00:25:17.313 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:17.313 [2024-11-29 13:09:00.121441] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:25:17.313 [2024-11-29 13:09:00.121494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089525 ] 00:25:17.313 [2024-11-29 13:09:00.185346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.313 [2024-11-29 13:09:00.228157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.313 Running I/O for 15 seconds... 
00:25:17.313 10737.00 IOPS, 41.94 MiB/s [2024-11-29T12:09:17.133Z] [2024-11-29 13:09:02.257138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:17.313 [2024-11-29 13:09:02.257260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.313 [2024-11-29 13:09:02.257429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.313 [2024-11-29 13:09:02.257460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.313 [2024-11-29 13:09:02.257468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.314 [2024-11-29 13:09:02.257474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.314 [2024-11-29 13:09:02.257482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.314 [2024-11-29 13:09:02.257489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.314 [2024-11-29 13:09:02.257497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.314 [2024-11-29 13:09:02.257504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.314 [2024-11-29 13:09:02.257512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.314 
[2024-11-29 13:09:02.257518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.314
[2024-11-29 13:09:02.257526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.314
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeated for lba:95864 through lba:96176 (len:8 each, qid:1, various cids), timestamps 13:09:02.257534-13:09:02.258138 ...]
[2024-11-29 13:09:02.258147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.315
[... identical WRITE command / ABORTED - SQ DELETION (00/08) completion pairs repeated for lba:96200 through lba:96568 (len:8 each, qid:1, various cids), timestamps 13:09:02.258154-13:09:02.258867 ...]
[2024-11-29 13:09:02.258888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.316
[2024-11-29 13:09:02.258898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96576 len:8 PRP1 0x0 PRP2 0x0 00:25:17.316
[2024-11-29 13:09:02.258905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.316
[... "579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o" / manual-completion / WRITE cid:0 / ABORTED - SQ DELETION groups repeated for lba:96584 through lba:96680, timestamps 13:09:02.258914-13:09:02.259222 ...]
[2024-11-29 13:09:02.269943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.316
[2024-11-29 13:09:02.269959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.316
[2024-11-29 13:09:02.269968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 PRP1 0x0 PRP2 0x0 00:25:17.316
[2024-11-29 13:09:02.269976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.316 [2024-11-29 13:09:02.269984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.316 [2024-11-29 13:09:02.269990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.316 [2024-11-29 13:09:02.269999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96696 len:8 PRP1 0x0 PRP2 0x0 00:25:17.317 [2024-11-29 13:09:02.270007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:02.270057] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:17.317 [2024-11-29 13:09:02.270082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.317 [2024-11-29 13:09:02.270092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:02.270102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.317 [2024-11-29 13:09:02.270110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:02.270118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.317 [2024-11-29 13:09:02.270126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 
13:09:02.270134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.317 [2024-11-29 13:09:02.270142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:02.270151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:17.317 [2024-11-29 13:09:02.270182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca6370 (9): Bad file descriptor 00:25:17.317 [2024-11-29 13:09:02.273685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:17.317 [2024-11-29 13:09:02.432078] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:25:17.317 9825.00 IOPS, 38.38 MiB/s [2024-11-29T12:09:17.137Z] 10191.67 IOPS, 39.81 MiB/s [2024-11-29T12:09:17.137Z] 10349.25 IOPS, 40.43 MiB/s [2024-11-29T12:09:17.137Z] [2024-11-29 13:09:05.827193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 13:09:05.827234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 13:09:05.827257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 
13:09:05.827273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 13:09:05.827288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 13:09:05.827303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 13:09:05.827323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 13:09:05.827339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 13:09:05.827354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 13:09:05.827369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827535] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.317 [2024-11-29 13:09:05.827611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.317 [2024-11-29 13:09:05.827619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:65 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.317 [2024-11-29 13:09:05.827625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.318 [2024-11-29 13:09:05.827640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.318 [2024-11-29 13:09:05.827656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.318 [2024-11-29 13:09:05.827670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.318 [2024-11-29 13:09:05.827685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.318 [2024-11-29 13:09:05.827701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:17.318 [2024-11-29 13:09:05.827710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 
[2024-11-29 13:09:05.827972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.827987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.827994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 [2024-11-29 13:09:05.828208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.318 [2024-11-29 13:09:05.828214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.318 
[2024-11-29 13:09:05.828223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 
[2024-11-29 13:09:05.828485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 
[2024-11-29 13:09:05.828740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.319 [2024-11-29 13:09:05.828791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.319 [2024-11-29 13:09:05.828806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.319 [2024-11-29 13:09:05.828820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.319 [2024-11-29 13:09:05.828828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.828984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.828990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 
[2024-11-29 13:09:05.828998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:05.829152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1cd4d10 is same with the state(6) to be set 00:25:17.320 [2024-11-29 13:09:05.829168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.320 [2024-11-29 13:09:05.829174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.320 [2024-11-29 13:09:05.829180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55440 len:8 PRP1 0x0 PRP2 0x0 00:25:17.320 [2024-11-29 13:09:05.829188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829234] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:17.320 [2024-11-29 13:09:05.829258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.320 [2024-11-29 13:09:05.829265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.320 [2024-11-29 13:09:05.829282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.320 [2024-11-29 13:09:05.829295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:17.320 [2024-11-29 13:09:05.829309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:05.829316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:17.320 [2024-11-29 13:09:05.832222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:17.320 [2024-11-29 13:09:05.832254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca6370 (9): Bad file descriptor 00:25:17.320 [2024-11-29 13:09:05.898799] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:17.320 10257.80 IOPS, 40.07 MiB/s [2024-11-29T12:09:17.140Z] 10368.17 IOPS, 40.50 MiB/s [2024-11-29T12:09:17.140Z] 10441.57 IOPS, 40.79 MiB/s [2024-11-29T12:09:17.140Z] 10508.38 IOPS, 41.05 MiB/s [2024-11-29T12:09:17.140Z] 10557.22 IOPS, 41.24 MiB/s [2024-11-29T12:09:17.140Z] [2024-11-29 13:09:10.290438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:10.290473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:10.290490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:10.290499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:10.290508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 
13:09:10.290515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:10.290524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:10.290530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:10.290539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:10.290546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:10.290555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:10.290561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:10.290569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:10.290576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:10.290584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:10.290592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.320 [2024-11-29 13:09:10.290604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.320 [2024-11-29 13:09:10.290611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.321 [2024-11-29 13:09:10.290627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.321 [2024-11-29 13:09:10.290642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.321 [2024-11-29 13:09:10.290657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:17.321 [2024-11-29 13:09:10.290696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 
[2024-11-29 13:09:10.290957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.290987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.290996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.291004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.291011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.291019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.291026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.321 [2024-11-29 13:09:10.291034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:17.321 [2024-11-29 13:09:10.291040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.321 [2024-11-29 13:09:10.291048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:17.321 [2024-11-29 13:09:10.291055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / "ABORTED - SQ DELETION" pairs repeated for lba:70688 through lba:70992 (timestamps 13:09:10.291063 to 13:09:10.291630), elided ...]
00:25:17.322 [2024-11-29 13:09:10.291653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:17.322 [2024-11-29 13:09:10.291660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71000 len:8 PRP1 0x0 PRP2 0x0
00:25:17.322 [2024-11-29 13:09:10.291667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.322 [2024-11-29 13:09:10.291677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... identical "aborting queued i/o" / manual-completion / "ABORTED - SQ DELETION" sequences repeated for lba:71008 through lba:71344 (timestamps 13:09:10.291682 to 13:09:10.303139), elided ...]
00:25:17.324 [2024-11-29 13:09:10.303148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:17.324 [2024-11-29 13:09:10.303155] nvme_qpair.c:
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.324 [2024-11-29 13:09:10.303162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71352 len:8 PRP1 0x0 PRP2 0x0 00:25:17.324 [2024-11-29 13:09:10.303172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.324 [2024-11-29 13:09:10.303181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.324 [2024-11-29 13:09:10.303188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.324 [2024-11-29 13:09:10.303195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71360 len:8 PRP1 0x0 PRP2 0x0 00:25:17.324 [2024-11-29 13:09:10.303204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.324 [2024-11-29 13:09:10.303213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.324 [2024-11-29 13:09:10.303220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.324 [2024-11-29 13:09:10.303227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71368 len:8 PRP1 0x0 PRP2 0x0 00:25:17.324 [2024-11-29 13:09:10.303236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.324 [2024-11-29 13:09:10.303245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.324 [2024-11-29 13:09:10.303252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.324 [2024-11-29 13:09:10.303259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71376 len:8 PRP1 0x0 PRP2 0x0 00:25:17.324 
[2024-11-29 13:09:10.303268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.324 [2024-11-29 13:09:10.303277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.324 [2024-11-29 13:09:10.303283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.324 [2024-11-29 13:09:10.303291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71384 len:8 PRP1 0x0 PRP2 0x0 00:25:17.324 [2024-11-29 13:09:10.303299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.324 [2024-11-29 13:09:10.303308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.324 [2024-11-29 13:09:10.303315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.324 [2024-11-29 13:09:10.303322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71392 len:8 PRP1 0x0 PRP2 0x0 00:25:17.324 [2024-11-29 13:09:10.303331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.324 [2024-11-29 13:09:10.303340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.325 [2024-11-29 13:09:10.303347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.325 [2024-11-29 13:09:10.303355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71400 len:8 PRP1 0x0 PRP2 0x0 00:25:17.325 [2024-11-29 13:09:10.303365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.325 [2024-11-29 13:09:10.303415] bdev_nvme.c:2052:bdev_nvme_failover_trid: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:17.325 [2024-11-29 13:09:10.303443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.325 [2024-11-29 13:09:10.303453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.325 [2024-11-29 13:09:10.303463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.325 [2024-11-29 13:09:10.303472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.325 [2024-11-29 13:09:10.303481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.325 [2024-11-29 13:09:10.303491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.325 [2024-11-29 13:09:10.303500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.325 [2024-11-29 13:09:10.303509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.325 [2024-11-29 13:09:10.303518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:25:17.325 [2024-11-29 13:09:10.303557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca6370 (9): Bad file descriptor 00:25:17.325 [2024-11-29 13:09:10.307447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:17.325 [2024-11-29 13:09:10.336140] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:17.325 10557.90 IOPS, 41.24 MiB/s [2024-11-29T12:09:17.145Z] 10572.91 IOPS, 41.30 MiB/s [2024-11-29T12:09:17.145Z] 10589.83 IOPS, 41.37 MiB/s [2024-11-29T12:09:17.145Z] 10613.85 IOPS, 41.46 MiB/s [2024-11-29T12:09:17.145Z] 10633.07 IOPS, 41.54 MiB/s [2024-11-29T12:09:17.145Z] 10642.73 IOPS, 41.57 MiB/s 00:25:17.325 Latency(us) 00:25:17.325 [2024-11-29T12:09:17.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.325 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.325 Verification LBA range: start 0x0 length 0x4000 00:25:17.325 NVMe0n1 : 15.01 10637.99 41.55 800.94 0.00 11166.49 439.87 21769.35 00:25:17.325 [2024-11-29T12:09:17.145Z] =================================================================================================================== 00:25:17.325 [2024-11-29T12:09:17.145Z] Total : 10637.99 41.55 800.94 0.00 11166.49 439.87 21769.35 00:25:17.325 Received shutdown signal, test time was about 15.000000 seconds 00:25:17.325 00:25:17.325 Latency(us) 00:25:17.325 [2024-11-29T12:09:17.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.325 [2024-11-29T12:09:17.145Z] =================================================================================================================== 00:25:17.325 [2024-11-29T12:09:17.145Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2092566 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2092566 /var/tmp/bdevperf.sock 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2092566 ']' 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:17.325 [2024-11-29 13:09:16.862223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:17.325 13:09:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:17.325 [2024-11-29 13:09:17.054774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:17.325 13:09:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.953 NVMe0n1 00:25:17.953 13:09:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:18.212 00:25:18.212 13:09:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:18.471 00:25:18.471 13:09:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.471 13:09:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:18.730 13:09:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.730 13:09:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:22.015 13:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.015 13:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:22.015 13:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:22.015 13:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2093493 00:25:22.015 13:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2093493 00:25:23.393 { 00:25:23.393 "results": [ 00:25:23.393 { 00:25:23.393 "job": "NVMe0n1", 00:25:23.393 "core_mask": "0x1", 00:25:23.393 "workload": "verify", 00:25:23.393 "status": "finished", 00:25:23.393 "verify_range": { 00:25:23.393 "start": 0, 00:25:23.393 "length": 16384 00:25:23.393 }, 00:25:23.393 "queue_depth": 128, 00:25:23.393 "io_size": 4096, 00:25:23.393 "runtime": 1.011313, 00:25:23.393 "iops": 10691.052127284036, 00:25:23.393 "mibps": 41.761922372203266, 00:25:23.393 "io_failed": 0, 00:25:23.393 "io_timeout": 0, 00:25:23.393 "avg_latency_us": 
11927.679289678135, 00:25:23.393 "min_latency_us": 2635.686956521739, 00:25:23.393 "max_latency_us": 11625.51652173913 00:25:23.393 } 00:25:23.393 ], 00:25:23.393 "core_count": 1 00:25:23.393 } 00:25:23.393 13:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:23.393 [2024-11-29 13:09:16.487397] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:25:23.393 [2024-11-29 13:09:16.487449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092566 ] 00:25:23.393 [2024-11-29 13:09:16.550368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.393 [2024-11-29 13:09:16.588271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.393 [2024-11-29 13:09:18.495906] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:23.393 [2024-11-29 13:09:18.495956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.393 [2024-11-29 13:09:18.495967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.393 [2024-11-29 13:09:18.495976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.393 [2024-11-29 13:09:18.495983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.393 [2024-11-29 13:09:18.495990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:23.393 [2024-11-29 13:09:18.495997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.393 [2024-11-29 13:09:18.496004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.393 [2024-11-29 13:09:18.496010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.393 [2024-11-29 13:09:18.496018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:23.393 [2024-11-29 13:09:18.496043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:23.393 [2024-11-29 13:09:18.496058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b2370 (9): Bad file descriptor 00:25:23.393 [2024-11-29 13:09:18.588123] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:23.393 Running I/O for 1 seconds... 
00:25:23.393 10683.00 IOPS, 41.73 MiB/s 00:25:23.393 Latency(us) 00:25:23.393 [2024-11-29T12:09:23.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.393 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:23.393 Verification LBA range: start 0x0 length 0x4000 00:25:23.393 NVMe0n1 : 1.01 10691.05 41.76 0.00 0.00 11927.68 2635.69 11625.52 00:25:23.393 [2024-11-29T12:09:23.213Z] =================================================================================================================== 00:25:23.393 [2024-11-29T12:09:23.213Z] Total : 10691.05 41.76 0.00 0.00 11927.68 2635.69 11625.52 00:25:23.393 13:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.393 13:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:23.393 13:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.652 13:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.652 13:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:23.652 13:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.911 13:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:27.196 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:27.196 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:27.196 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2092566 00:25:27.196 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2092566 ']' 00:25:27.196 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2092566 00:25:27.196 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:27.196 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.196 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2092566 00:25:27.197 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.197 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:27.197 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2092566' 00:25:27.197 killing process with pid 2092566 00:25:27.197 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2092566 00:25:27.197 13:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2092566 00:25:27.455 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:27.455 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:27.714 rmmod nvme_tcp 00:25:27.714 rmmod nvme_fabrics 00:25:27.714 rmmod nvme_keyring 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2089252 ']' 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2089252 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2089252 ']' 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2089252 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2089252 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2089252' 00:25:27.714 killing process with pid 2089252 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2089252 00:25:27.714 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2089252 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.973 13:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.877 13:09:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:29.877 00:25:29.877 real 0m37.067s 00:25:29.877 user 1m58.479s 00:25:29.877 sys 
0m7.637s 00:25:29.877 13:09:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:29.877 13:09:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:29.877 ************************************ 00:25:29.877 END TEST nvmf_failover 00:25:29.877 ************************************ 00:25:29.877 13:09:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:29.877 13:09:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:29.877 13:09:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:29.877 13:09:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.136 ************************************ 00:25:30.136 START TEST nvmf_host_discovery 00:25:30.136 ************************************ 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:30.136 * Looking for test storage... 
00:25:30.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:30.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.136 --rc genhtml_branch_coverage=1 00:25:30.136 --rc genhtml_function_coverage=1 00:25:30.136 --rc 
genhtml_legend=1 00:25:30.136 --rc geninfo_all_blocks=1 00:25:30.136 --rc geninfo_unexecuted_blocks=1 00:25:30.136 00:25:30.136 ' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:30.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.136 --rc genhtml_branch_coverage=1 00:25:30.136 --rc genhtml_function_coverage=1 00:25:30.136 --rc genhtml_legend=1 00:25:30.136 --rc geninfo_all_blocks=1 00:25:30.136 --rc geninfo_unexecuted_blocks=1 00:25:30.136 00:25:30.136 ' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:30.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.136 --rc genhtml_branch_coverage=1 00:25:30.136 --rc genhtml_function_coverage=1 00:25:30.136 --rc genhtml_legend=1 00:25:30.136 --rc geninfo_all_blocks=1 00:25:30.136 --rc geninfo_unexecuted_blocks=1 00:25:30.136 00:25:30.136 ' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:30.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.136 --rc genhtml_branch_coverage=1 00:25:30.136 --rc genhtml_function_coverage=1 00:25:30.136 --rc genhtml_legend=1 00:25:30.136 --rc geninfo_all_blocks=1 00:25:30.136 --rc geninfo_unexecuted_blocks=1 00:25:30.136 00:25:30.136 ' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.136 13:09:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.136 13:09:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.136 13:09:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:30.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.136 13:09:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:35.404 
13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.404 13:09:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:35.404 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:35.404 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:35.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:35.405 Found net devices under 0000:86:00.0: cvl_0_0 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:35.405 Found net devices under 0000:86:00.1: cvl_0_1 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.405 13:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:35.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:25:35.405 00:25:35.405 --- 10.0.0.2 ping statistics --- 00:25:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.405 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:35.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:25:35.405 00:25:35.405 --- 10.0.0.1 ping statistics --- 00:25:35.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.405 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.405 
13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2097880 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2097880 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2097880 ']' 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.405 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:35.405 [2024-11-29 13:09:35.210322] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:25:35.405 [2024-11-29 13:09:35.210369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.663 [2024-11-29 13:09:35.276876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.663 [2024-11-29 13:09:35.317920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.663 [2024-11-29 13:09:35.317962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.663 [2024-11-29 13:09:35.317969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.663 [2024-11-29 13:09:35.317975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.663 [2024-11-29 13:09:35.317983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:35.663 [2024-11-29 13:09:35.318557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.663 [2024-11-29 13:09:35.455590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.663 [2024-11-29 13:09:35.463763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:35.663 13:09:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.663 null0 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.663 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.921 null1 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2097965 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2097965 /tmp/host.sock 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2097965 ']' 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:35.921 
13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:35.921 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.921 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:35.922 [2024-11-29 13:09:35.541904] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:25:35.922 [2024-11-29 13:09:35.541944] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097965 ] 00:25:35.922 [2024-11-29 13:09:35.602875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.922 [2024-11-29 13:09:35.643807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.922 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.922 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:35.922 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:35.922 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:35.922 13:09:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.922 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.179 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.179 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:36.179 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.179 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.179 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.179 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:36.180 13:09:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:36.180 
13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:36.180 13:09:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.440 [2024-11-29 13:09:36.053268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:36.440 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:36.441 
13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:36.441 13:09:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:37.008 [2024-11-29 13:09:36.761780] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:37.008 [2024-11-29 13:09:36.761798] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:37.008 [2024-11-29 13:09:36.761810] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:37.267 [2024-11-29 13:09:36.849079] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:37.267 [2024-11-29 13:09:37.073211] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was 
created to 10.0.0.2:4420 00:25:37.267 [2024-11-29 13:09:37.074004] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18d3e30:1 started. 00:25:37.267 [2024-11-29 13:09:37.075387] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:37.267 [2024-11-29 13:09:37.075403] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:37.267 [2024-11-29 13:09:37.081053] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18d3e30 was disconnected and freed. delete nvme_qpair. 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.526 
13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.526 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:37.786 
13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:37.786 
13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:37.786 [2024-11-29 13:09:37.455702] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18d41b0:1 started. 00:25:37.786 [2024-11-29 13:09:37.462046] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18d41b0 was disconnected and freed. delete nvme_qpair. 
00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.786 [2024-11-29 13:09:37.549507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:37.786 [2024-11-29 13:09:37.550469] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:37.786 [2024-11-29 13:09:37.550489] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:37.786 13:09:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:37.786 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:38.044 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:38.044 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.044 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.044 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.044 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.044 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.044 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.044 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.045 [2024-11-29 13:09:37.637742] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:38.045 13:09:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:38.302 [2024-11-29 13:09:37.939203] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:38.302 [2024-11-29 13:09:37.939237] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:38.302 [2024-11-29 13:09:37.939246] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:25:38.302 [2024-11-29 13:09:37.939251] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.238 [2024-11-29 13:09:38.801510] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:39.238 [2024-11-29 13:09:38.801531] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:39.238 [2024-11-29 13:09:38.807597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.238 [2024-11-29 13:09:38.807614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.238 [2024-11-29 13:09:38.807622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.238 [2024-11-29 13:09:38.807629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.238 [2024-11-29 13:09:38.807652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.238 [2024-11-29 13:09:38.807659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.238 [2024-11-29 13:09:38.807667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.238 [2024-11-29 13:09:38.807673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.238 [2024-11-29 13:09:38.807679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4390 is same with the state(6) to be set 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:39.238 13:09:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.238 [2024-11-29 13:09:38.817609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a4390 (9): Bad file descriptor 00:25:39.238 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.238 [2024-11-29 13:09:38.827644] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:39.238 [2024-11-29 13:09:38.827656] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:39.238 [2024-11-29 13:09:38.827664] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:39.238 [2024-11-29 13:09:38.827668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:39.238 [2024-11-29 13:09:38.827684] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:39.239 [2024-11-29 13:09:38.827878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.239 [2024-11-29 13:09:38.827892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a4390 with addr=10.0.0.2, port=4420 00:25:39.239 [2024-11-29 13:09:38.827901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4390 is same with the state(6) to be set 00:25:39.239 [2024-11-29 13:09:38.827913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a4390 (9): Bad file descriptor 00:25:39.239 [2024-11-29 13:09:38.827922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:39.239 [2024-11-29 13:09:38.827929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:39.239 [2024-11-29 13:09:38.827936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:39.239 [2024-11-29 13:09:38.827942] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:39.239 [2024-11-29 13:09:38.827952] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:39.239 [2024-11-29 13:09:38.827957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:39.239 [2024-11-29 13:09:38.837714] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:39.239 [2024-11-29 13:09:38.837724] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:39.239 [2024-11-29 13:09:38.837728] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:39.239 [2024-11-29 13:09:38.837732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:39.239 [2024-11-29 13:09:38.837744] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:39.239 [2024-11-29 13:09:38.838027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.239 [2024-11-29 13:09:38.838039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a4390 with addr=10.0.0.2, port=4420 00:25:39.239 [2024-11-29 13:09:38.838047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4390 is same with the state(6) to be set 00:25:39.239 [2024-11-29 13:09:38.838058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a4390 (9): Bad file descriptor 00:25:39.239 [2024-11-29 13:09:38.838067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:39.239 [2024-11-29 13:09:38.838073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:39.239 [2024-11-29 13:09:38.838080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:39.239 [2024-11-29 13:09:38.838086] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:39.239 [2024-11-29 13:09:38.838090] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:39.239 [2024-11-29 13:09:38.838094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:39.239 [2024-11-29 13:09:38.847775] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:39.239 [2024-11-29 13:09:38.847792] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:39.239 [2024-11-29 13:09:38.847796] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:39.239 [2024-11-29 13:09:38.847800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:39.239 [2024-11-29 13:09:38.847815] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:39.239 [2024-11-29 13:09:38.848038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.239 [2024-11-29 13:09:38.848052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a4390 with addr=10.0.0.2, port=4420 00:25:39.239 [2024-11-29 13:09:38.848060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4390 is same with the state(6) to be set 00:25:39.239 [2024-11-29 13:09:38.848071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a4390 (9): Bad file descriptor 00:25:39.239 [2024-11-29 13:09:38.848080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:39.239 [2024-11-29 13:09:38.848086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:39.239 [2024-11-29 13:09:38.848093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:39.239 [2024-11-29 13:09:38.848098] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:39.239 [2024-11-29 13:09:38.848103] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:39.239 [2024-11-29 13:09:38.848107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.239 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:39.239 [2024-11-29 13:09:38.857845] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:39.239 [2024-11-29 13:09:38.857859] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:39.239 [2024-11-29 13:09:38.857864] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:39.239 [2024-11-29 13:09:38.857869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:39.239 [2024-11-29 13:09:38.857886] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:39.239 [2024-11-29 13:09:38.858157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.239 [2024-11-29 13:09:38.858170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a4390 with addr=10.0.0.2, port=4420 00:25:39.239 [2024-11-29 13:09:38.858179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4390 is same with the state(6) to be set 00:25:39.239 [2024-11-29 13:09:38.858191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a4390 (9): Bad file descriptor 00:25:39.239 [2024-11-29 13:09:38.858200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:39.239 [2024-11-29 13:09:38.858206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:39.239 [2024-11-29 13:09:38.858213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:39.239 [2024-11-29 13:09:38.858221] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:39.239 [2024-11-29 13:09:38.858227] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:39.239 [2024-11-29 13:09:38.858232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:39.239 [2024-11-29 13:09:38.867918] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:39.239 [2024-11-29 13:09:38.867931] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:39.239 [2024-11-29 13:09:38.867936] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:39.239 [2024-11-29 13:09:38.867940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:39.239 [2024-11-29 13:09:38.867955] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:39.239 [2024-11-29 13:09:38.868138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.239 [2024-11-29 13:09:38.868149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a4390 with addr=10.0.0.2, port=4420 00:25:39.239 [2024-11-29 13:09:38.868157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4390 is same with the state(6) to be set 00:25:39.240 [2024-11-29 13:09:38.868167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a4390 (9): Bad file descriptor 00:25:39.240 [2024-11-29 13:09:38.868177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:39.240 [2024-11-29 13:09:38.868183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:39.240 [2024-11-29 13:09:38.868190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:39.240 [2024-11-29 13:09:38.868195] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:39.240 [2024-11-29 13:09:38.868200] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:39.240 [2024-11-29 13:09:38.868204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:39.240 [2024-11-29 13:09:38.877986] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:39.240 [2024-11-29 13:09:38.877997] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:39.240 [2024-11-29 13:09:38.878001] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:39.240 [2024-11-29 13:09:38.878008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:39.240 [2024-11-29 13:09:38.878021] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:39.240 [2024-11-29 13:09:38.878195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.240 [2024-11-29 13:09:38.878206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a4390 with addr=10.0.0.2, port=4420 00:25:39.240 [2024-11-29 13:09:38.878214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4390 is same with the state(6) to be set 00:25:39.240 [2024-11-29 13:09:38.878224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a4390 (9): Bad file descriptor 00:25:39.240 [2024-11-29 13:09:38.878234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:39.240 [2024-11-29 13:09:38.878239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:39.240 [2024-11-29 13:09:38.878246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:39.240 [2024-11-29 13:09:38.878251] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:39.240 [2024-11-29 13:09:38.878256] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:39.240 [2024-11-29 13:09:38.878260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:39.240 [2024-11-29 13:09:38.888053] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:39.240 [2024-11-29 13:09:38.888062] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:39.240 [2024-11-29 13:09:38.888066] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:39.240 [2024-11-29 13:09:38.888070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:39.240 [2024-11-29 13:09:38.888082] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:39.240 [2024-11-29 13:09:38.888238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.240 [2024-11-29 13:09:38.888248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a4390 with addr=10.0.0.2, port=4420 00:25:39.240 [2024-11-29 13:09:38.888255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a4390 is same with the state(6) to be set 00:25:39.240 [2024-11-29 13:09:38.888265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a4390 (9): Bad file descriptor 00:25:39.240 [2024-11-29 13:09:38.888275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:39.240 [2024-11-29 13:09:38.888281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:39.240 [2024-11-29 13:09:38.888287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:39.240 [2024-11-29 13:09:38.888293] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:39.240 [2024-11-29 13:09:38.888297] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:39.240 [2024-11-29 13:09:38.888301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:39.240 [2024-11-29 13:09:38.888784] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:39.240 [2024-11-29 13:09:38.888799] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
xargs 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:39.240 13:09:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.240 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.241 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.241 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.241 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:39.499 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.500 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.500 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.500 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:39.500 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:39.500 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:39.500 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:39.500 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:39.500 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.500 13:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.434 [2024-11-29 13:09:40.211473] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:40.434 [2024-11-29 13:09:40.211496] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 
00:25:40.434 [2024-11-29 13:09:40.211508] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:40.692 [2024-11-29 13:09:40.338898] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:40.950 [2024-11-29 13:09:40.645273] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:40.951 [2024-11-29 13:09:40.645854] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x18df330:1 started. 00:25:40.951 [2024-11-29 13:09:40.647551] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:40.951 [2024-11-29 13:09:40.647581] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.951 [2024-11-29 13:09:40.650411] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x18df330 was disconnected and freed. delete nvme_qpair. 
00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.951 request: 00:25:40.951 { 00:25:40.951 "name": "nvme", 00:25:40.951 "trtype": "tcp", 00:25:40.951 "traddr": "10.0.0.2", 00:25:40.951 "adrfam": "ipv4", 00:25:40.951 "trsvcid": "8009", 00:25:40.951 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:40.951 "wait_for_attach": true, 00:25:40.951 "method": "bdev_nvme_start_discovery", 00:25:40.951 "req_id": 1 00:25:40.951 } 00:25:40.951 Got JSON-RPC error response 00:25:40.951 response: 00:25:40.951 { 00:25:40.951 "code": -17, 00:25:40.951 "message": "File exists" 00:25:40.951 } 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.951 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.209 request: 00:25:41.209 { 00:25:41.209 "name": "nvme_second", 00:25:41.209 "trtype": "tcp", 00:25:41.209 "traddr": "10.0.0.2", 00:25:41.209 "adrfam": "ipv4", 00:25:41.209 "trsvcid": "8009", 00:25:41.209 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:41.209 "wait_for_attach": true, 00:25:41.209 "method": "bdev_nvme_start_discovery", 00:25:41.209 "req_id": 1 00:25:41.209 } 00:25:41.209 Got JSON-RPC error response 00:25:41.209 response: 00:25:41.209 { 00:25:41.209 "code": -17, 00:25:41.209 "message": "File exists" 00:25:41.209 } 
00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 
-- # set +x 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:41.209 13:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.142 [2024-11-29 13:09:41.887215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.142 [2024-11-29 13:09:41.887245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b1200 with addr=10.0.0.2, port=8010 00:25:42.142 [2024-11-29 13:09:41.887261] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:42.142 [2024-11-29 13:09:41.887267] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:42.142 [2024-11-29 13:09:41.887274] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:43.077 [2024-11-29 13:09:42.889573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.077 [2024-11-29 13:09:42.889597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b1200 with addr=10.0.0.2, port=8010 00:25:43.077 [2024-11-29 13:09:42.889609] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:43.077 [2024-11-29 13:09:42.889615] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:43.077 [2024-11-29 13:09:42.889621] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:44.452 [2024-11-29 13:09:43.891797] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:44.452 request: 00:25:44.452 { 00:25:44.452 "name": "nvme_second", 00:25:44.452 "trtype": "tcp", 00:25:44.452 "traddr": "10.0.0.2", 00:25:44.452 "adrfam": "ipv4", 00:25:44.452 "trsvcid": "8010", 00:25:44.452 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:44.452 "wait_for_attach": false, 00:25:44.452 "attach_timeout_ms": 3000, 00:25:44.452 "method": "bdev_nvme_start_discovery", 00:25:44.452 "req_id": 1 
00:25:44.453 } 00:25:44.453 Got JSON-RPC error response 00:25:44.453 response: 00:25:44.453 { 00:25:44.453 "code": -110, 00:25:44.453 "message": "Connection timed out" 00:25:44.453 } 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2097965 00:25:44.453 13:09:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.453 13:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.453 rmmod nvme_tcp 00:25:44.453 rmmod nvme_fabrics 00:25:44.453 rmmod nvme_keyring 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2097880 ']' 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2097880 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2097880 ']' 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2097880 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2097880 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2097880' 00:25:44.453 killing process with pid 2097880 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2097880 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2097880 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.453 13:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:25:46.987 00:25:46.987 real 0m16.585s 00:25:46.987 user 0m20.432s 00:25:46.987 sys 0m5.276s 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.987 ************************************ 00:25:46.987 END TEST nvmf_host_discovery 00:25:46.987 ************************************ 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.987 ************************************ 00:25:46.987 START TEST nvmf_host_multipath_status 00:25:46.987 ************************************ 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:46.987 * Looking for test storage... 
00:25:46.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.987 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:46.988 13:09:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.988 13:09:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:46.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.988 --rc genhtml_branch_coverage=1 00:25:46.988 --rc genhtml_function_coverage=1 00:25:46.988 --rc genhtml_legend=1 00:25:46.988 --rc geninfo_all_blocks=1 00:25:46.988 --rc geninfo_unexecuted_blocks=1 00:25:46.988 00:25:46.988 ' 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:46.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.988 --rc genhtml_branch_coverage=1 00:25:46.988 --rc genhtml_function_coverage=1 00:25:46.988 --rc genhtml_legend=1 00:25:46.988 --rc geninfo_all_blocks=1 00:25:46.988 --rc geninfo_unexecuted_blocks=1 00:25:46.988 00:25:46.988 ' 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:46.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.988 --rc genhtml_branch_coverage=1 00:25:46.988 --rc genhtml_function_coverage=1 00:25:46.988 --rc genhtml_legend=1 00:25:46.988 --rc geninfo_all_blocks=1 00:25:46.988 --rc geninfo_unexecuted_blocks=1 00:25:46.988 00:25:46.988 ' 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:46.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.988 --rc genhtml_branch_coverage=1 00:25:46.988 --rc genhtml_function_coverage=1 00:25:46.988 --rc genhtml_legend=1 00:25:46.988 --rc geninfo_all_blocks=1 00:25:46.988 --rc geninfo_unexecuted_blocks=1 00:25:46.988 00:25:46.988 ' 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:46.988 
13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:46.988 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:46.989 13:09:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.989 13:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:52.296 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:52.296 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:52.296 Found net devices under 0000:86:00.0: cvl_0_0 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.296 13:09:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:52.296 Found net devices under 0000:86:00.1: cvl_0_1 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.296 13:09:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.296 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.297 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.297 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.297 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.297 13:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.297 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.297 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.297 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.297 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:25:52.556 00:25:52.556 --- 10.0.0.2 ping statistics --- 00:25:52.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.556 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:52.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:25:52.556 00:25:52.556 --- 10.0.0.1 ping statistics --- 00:25:52.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.556 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.556 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2102978 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2102978 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2102978 ']' 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.557 13:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.557 [2024-11-29 13:09:52.261549] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:25:52.557 [2024-11-29 13:09:52.261594] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.557 [2024-11-29 13:09:52.329726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:52.557 [2024-11-29 13:09:52.372301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.557 [2024-11-29 13:09:52.372339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:52.557 [2024-11-29 13:09:52.372346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.557 [2024-11-29 13:09:52.372352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.557 [2024-11-29 13:09:52.372357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.557 [2024-11-29 13:09:52.373588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.557 [2024-11-29 13:09:52.373592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.495 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.495 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:53.495 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.495 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.495 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:53.495 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.495 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2102978 00:25:53.495 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:53.495 [2024-11-29 13:09:53.277663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.495 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:53.755 Malloc0 00:25:53.755 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:54.015 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:54.274 13:09:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.274 [2024-11-29 13:09:54.058981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.274 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:54.533 [2024-11-29 13:09:54.251511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2103296 00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2103296 /var/tmp/bdevperf.sock 00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2103296 ']' 00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:54.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.533 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:54.794 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.794 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:54.794 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:55.053 13:09:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:55.311 Nvme0n1 00:25:55.311 13:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:55.876 Nvme0n1 00:25:55.876 13:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:55.876 13:09:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:57.779 13:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:57.779 13:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:58.038 13:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:58.297 13:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:59.233 13:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:59.233 13:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:59.233 13:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.234 13:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.492 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.492 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:59.492 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.492 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:59.751 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.751 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:59.751 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.751 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:59.751 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.751 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.751 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.751 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.010 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.010 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.010 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.010 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.269 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.269 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:00.269 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.269 13:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.530 13:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.530 13:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:00.530 13:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:00.789 13:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:00.789 13:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:02.167 13:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:02.167 13:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:02.167 13:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.167 13:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.167 13:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.167 13:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:02.167 13:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.167 13:10:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.426 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.426 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.426 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:26:02.426 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.426 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.426 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.426 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.426 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.685 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.685 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.685 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.685 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.944 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.944 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:02.944 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.944 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:03.203 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.203 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:03.203 13:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:03.461 13:10:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:03.461 13:10:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:04.837 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:04.837 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.837 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.837 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.837 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.837 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:04.837 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.837 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.095 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.095 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.095 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.095 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.095 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.095 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.095 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.095 13:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.356 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.356 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.356 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.356 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.614 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.614 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:05.614 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.614 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.873 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.873 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:05.873 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:06.131 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:06.131 13:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:07.506 13:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:07.506 13:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.506 13:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.506 13:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:07.506 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.506 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:07.506 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.506 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.764 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.764 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.764 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.764 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.764 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.764 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.764 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.764 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.022 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.022 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.022 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.022 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.280 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.280 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:08.280 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.280 13:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.538 13:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.538 13:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:08.538 13:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:08.796 13:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:08.796 13:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.168 13:10:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.168 13:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.426 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.426 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.426 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.426 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.684 
13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.684 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:10.684 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.684 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.942 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.942 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:10.942 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.942 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.942 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.942 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:10.942 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:11.200 13:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:11.458 13:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:12.392 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:12.392 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:12.392 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.392 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.652 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.652 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:12.652 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.652 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:12.910 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.910 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:12.910 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.910 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.192 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.192 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.192 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.192 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.192 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.192 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:13.192 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.192 13:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.451 13:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.451 13:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:13.451 13:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.451 13:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:13.709 13:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.709 13:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:13.967 13:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:13.967 13:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:14.225 13:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:14.225 13:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.598 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.856 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.856 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.856 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.856 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.856 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:15.856 
13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.114 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.114 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.114 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.114 13:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.372 13:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.372 13:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:16.372 13:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.372 13:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.631 13:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.631 13:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:16.631 13:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:16.889 13:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:16.889 13:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:18.262 13:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:18.262 13:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:18.263 13:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.263 13:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.263 13:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.263 13:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:18.263 13:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.263 13:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.520 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.520 13:10:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.520 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.520 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.520 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.520 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.520 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.520 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.777 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.777 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:18.777 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.778 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.036 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.036 
13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:19.036 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.036 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.295 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.295 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:19.295 13:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:19.295 13:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:19.553 13:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.927 13:10:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.927 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.185 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.185 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.185 13:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.185 13:10:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.443 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.443 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.443 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.443 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.701 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.701 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.701 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.701 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.960 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.960 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:21.960 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.960 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:22.218 13:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:23.593 13:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:23.593 13:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:23.593 13:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.593 13:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.593 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.593 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:23.593 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.593 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.593 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.593 13:10:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.593 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.593 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.852 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.852 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.852 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.852 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:24.110 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.110 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:24.110 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:24.110 13:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.368 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.368 
13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:24.368 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.368 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.626 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.626 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2103296 00:26:24.626 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2103296 ']' 00:26:24.626 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2103296 00:26:24.626 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:24.626 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.626 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2103296 00:26:24.627 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:24.627 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:24.627 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2103296' 00:26:24.627 killing process with pid 2103296 00:26:24.627 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2103296 00:26:24.627 
13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2103296 00:26:24.627 { 00:26:24.627 "results": [ 00:26:24.627 { 00:26:24.627 "job": "Nvme0n1", 00:26:24.627 "core_mask": "0x4", 00:26:24.627 "workload": "verify", 00:26:24.627 "status": "terminated", 00:26:24.627 "verify_range": { 00:26:24.627 "start": 0, 00:26:24.627 "length": 16384 00:26:24.627 }, 00:26:24.627 "queue_depth": 128, 00:26:24.627 "io_size": 4096, 00:26:24.627 "runtime": 28.640361, 00:26:24.627 "iops": 10048.12753582261, 00:26:24.627 "mibps": 39.25049818680707, 00:26:24.627 "io_failed": 0, 00:26:24.627 "io_timeout": 0, 00:26:24.627 "avg_latency_us": 12717.011256721196, 00:26:24.627 "min_latency_us": 1296.473043478261, 00:26:24.627 "max_latency_us": 3078254.4139130437 00:26:24.627 } 00:26:24.627 ], 00:26:24.627 "core_count": 1 00:26:24.627 } 00:26:24.937 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2103296 00:26:24.937 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:24.937 [2024-11-29 13:09:54.328965] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:26:24.937 [2024-11-29 13:09:54.329016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2103296 ] 00:26:24.937 [2024-11-29 13:09:54.387408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.937 [2024-11-29 13:09:54.428036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.937 Running I/O for 90 seconds... 
00:26:24.937 10787.00 IOPS, 42.14 MiB/s [2024-11-29T12:10:24.757Z] 10821.50 IOPS, 42.27 MiB/s [2024-11-29T12:10:24.757Z] 10823.33 IOPS, 42.28 MiB/s [2024-11-29T12:10:24.757Z] 10852.00 IOPS, 42.39 MiB/s [2024-11-29T12:10:24.757Z] 10831.00 IOPS, 42.31 MiB/s [2024-11-29T12:10:24.757Z] 10863.33 IOPS, 42.43 MiB/s [2024-11-29T12:10:24.757Z] 10865.00 IOPS, 42.44 MiB/s [2024-11-29T12:10:24.757Z] 10863.25 IOPS, 42.43 MiB/s [2024-11-29T12:10:24.757Z] 10852.44 IOPS, 42.39 MiB/s [2024-11-29T12:10:24.757Z] 10837.90 IOPS, 42.34 MiB/s [2024-11-29T12:10:24.757Z] 10835.82 IOPS, 42.33 MiB/s [2024-11-29T12:10:24.757Z] 10834.00 IOPS, 42.32 MiB/s [2024-11-29T12:10:24.757Z] [2024-11-29 13:10:08.359310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.937 [2024-11-29 13:10:08.359346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.937 [2024-11-29 13:10:08.359368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.937 [2024-11-29 13:10:08.359380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.937 [2024-11-29 13:10:08.359397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.937 [2024-11-29 13:10:08.359405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.937 [2024-11-29 13:10:08.359418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.937 [2024-11-29 13:10:08.359425] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.937 [2024-11-29 13:10:08.359438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.937 [2024-11-29 13:10:08.359445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.937 [2024-11-29 13:10:08.359457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.937 [2024-11-29 13:10:08.359464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.937 [2024-11-29 13:10:08.359477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.937 [2024-11-29 13:10:08.359484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.937 [2024-11-29 13:10:08.359496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.937 [2024-11-29 13:10:08.359503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.937 [2024-11-29 13:10:08.359516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.937 [2024-11-29 13:10:08.359523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.938 [2024-11-29 13:10:08.359960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.938 [2024-11-29 13:10:08.359973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54592 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.359980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.359997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.938 [2024-11-29 13:10:08.360728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:24.938 [2024-11-29 13:10:08.360740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.939 [2024-11-29 13:10:08.360881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.939 [2024-11-29 13:10:08.360901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.360980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.360988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.939 [2024-11-29 13:10:08.361513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:24.939 [2024-11-29 13:10:08.361526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.361533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.361545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.361553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.361565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.361572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.361584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.361591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.940 [2024-11-29 13:10:08.362754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.940 [2024-11-29 13:10:08.362775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.940 [2024-11-29 13:10:08.362795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.940 [2024-11-29 13:10:08.362815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.940 [2024-11-29 13:10:08.362835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.940 [2024-11-29 13:10:08.362854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.940 [2024-11-29 13:10:08.362866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.941 [2024-11-29 13:10:08.362873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.362886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.941 [2024-11-29 13:10:08.362893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.362905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.362911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.362924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.362930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.362944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.362956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.362969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.362976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.362988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.362995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.941 [2024-11-29 13:10:08.363697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:24.941 [2024-11-29 13:10:08.363710] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.363992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.363999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.364013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.364020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.364032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.941 [2024-11-29 13:10:08.364039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.941 [2024-11-29 13:10:08.364051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.364287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.364294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.942 [2024-11-29 13:10:08.374183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.942 [2024-11-29 13:10:08.374204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.942 [2024-11-29 13:10:08.374479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.942 [2024-11-29 13:10:08.374492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.943 [2024-11-29 13:10:08.374915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.943 [2024-11-29 13:10:08.374922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... over a hundred similar nvme_qpair.c NOTICE record pairs elided: 243:nvme_io_qpair_print_command (WRITE, plus occasional READ) on sqid:1 nsid:1, lba 54272-55288 len:8, each followed by 474:spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamps 2024-11-29 13:10:08.374915 through 13:10:08.378227, sqhd 0040 wrapping through 0033 ...]
00:26:24.946 [2024-11-29 13:10:08.378227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.946 [2024-11-29 13:10:08.378874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.946 [2024-11-29 13:10:08.378886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.378893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.378905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.378912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.378924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.378931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.378943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.378958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.378973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.947 [2024-11-29 13:10:08.384685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.947 [2024-11-29 13:10:08.384706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.947 [2024-11-29 13:10:08.384725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.947 [2024-11-29 13:10:08.384746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.947 [2024-11-29 13:10:08.384765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.947 [2024-11-29 13:10:08.384784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.947 [2024-11-29 13:10:08.384802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.384815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.384822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.385214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.385227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.385242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.385249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.385261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.947 [2024-11-29 13:10:08.385268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.947 [2024-11-29 13:10:08.385280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.385983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.385990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.386002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.386009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.386021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.386027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.386040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.948 [2024-11-29 13:10:08.386046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.948 [2024-11-29 13:10:08.386060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.949 [2024-11-29 13:10:08.386276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.949 [2024-11-29 13:10:08.386297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.949 [2024-11-29 13:10:08.386841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.949 [2024-11-29 13:10:08.386848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.386861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.386869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.386881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.386888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.386900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.386907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.386920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.386928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.386941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.386953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.386966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.386974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.387982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.387989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.950 [2024-11-29 13:10:08.388323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.950 [2024-11-29 13:10:08.388335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.388342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.388354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.388361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.388373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.388379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.388392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.951 [2024-11-29 13:10:08.388399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.388411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.951 [2024-11-29 13:10:08.388418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.388431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.951 [2024-11-29 13:10:08.388438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.388451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.951 [2024-11-29 13:10:08.388457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.388470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.951 [2024-11-29 13:10:08.388477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.388489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.951 [2024-11-29 13:10:08.388496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.388508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.951 [2024-11-29 13:10:08.388515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.951 [2024-11-29 13:10:08.389728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.951 [2024-11-29 13:10:08.389740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.389985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.389997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.952 [2024-11-29 13:10:08.390253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.952 [2024-11-29 13:10:08.390272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.952 [2024-11-29 13:10:08.390896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.952 [2024-11-29 13:10:08.390909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.390915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.390928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.390935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.390952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.390959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.390972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.390979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.390991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.390998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.953 [2024-11-29 13:10:08.391194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.953 [2024-11-29 13:10:08.391201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.391643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.391650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.392003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.392014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:24.953 [2024-11-29 13:10:08.392029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.953 [2024-11-29 13:10:08.392036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.954 [2024-11-29 13:10:08.392331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.954 [2024-11-29 13:10:08.392351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.954 [2024-11-29 13:10:08.392371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.954 [2024-11-29 13:10:08.392391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.954 [2024-11-29 13:10:08.392410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.954 [2024-11-29 13:10:08.392429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.954 [2024-11-29 13:10:08.392448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.392751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.392758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.393092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.393103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:24.954 [2024-11-29 13:10:08.393117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.954 [2024-11-29 13:10:08.393124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.393535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.393547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.955 [2024-11-29 13:10:08.397633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:24.955 [2024-11-29 13:10:08.397645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.956 [2024-11-29 13:10:08.397690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.956 [2024-11-29 13:10:08.397709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.956 [2024-11-29 13:10:08.397966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:24.956 [2024-11-29 13:10:08.397980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.397987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.397999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.956 [2024-11-29 13:10:08.398400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.956 [2024-11-29 13:10:08.398406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398638] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.398985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.398992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.399011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.957 [2024-11-29 13:10:08.399030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.957 [2024-11-29 13:10:08.399049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.957 [2024-11-29 13:10:08.399069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.957 [2024-11-29 13:10:08.399088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.957 [2024-11-29 13:10:08.399108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.957 [2024-11-29 13:10:08.399126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.957 [2024-11-29 13:10:08.399145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.957 [2024-11-29 13:10:08.399165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.957 [2024-11-29 13:10:08.399177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.958 [2024-11-29 13:10:08.399407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.958 [2024-11-29 13:10:08.399414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
[... identical NOTICE pairs repeated for each outstanding I/O on qid:1 — WRITE commands (lba 54448-55280, len:8, SGL DATA BLOCK OFFSET) and READ commands (lba 54272-54336, len:8, SGL TRANSPORT DATA BLOCK), every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0x0073 through 0x0063 ...]
00:26:24.961 [2024-11-29 13:10:08.403377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.961 [2024-11-29 13:10:08.403383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.403824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.403831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.404098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.404109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.404122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.404129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.961 [2024-11-29 13:10:08.404142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.961 [2024-11-29 13:10:08.404148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.962 [2024-11-29 13:10:08.404924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.962 [2024-11-29 13:10:08.404943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.404983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.404990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.405005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.405012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.405024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.405031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.405043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.405050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.405062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.405068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.405081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.405087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.405100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.962 [2024-11-29 13:10:08.405106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.962 [2024-11-29 13:10:08.405119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.405982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.405989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.406010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.406029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.406048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.406206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.406226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.406246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.406264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.406283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.963 [2024-11-29 13:10:08.406302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.963 [2024-11-29 13:10:08.406315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.406801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.964 [2024-11-29 13:10:08.406821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.964 [2024-11-29 13:10:08.406840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.964 [2024-11-29 13:10:08.406859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.964 [2024-11-29 13:10:08.406878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.964 [2024-11-29 13:10:08.406897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.964 [2024-11-29 13:10:08.406916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.406929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.964 [2024-11-29 13:10:08.406936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.964 [2024-11-29 13:10:08.408375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.964 [2024-11-29 13:10:08.408388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.408990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.408997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.409010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.409017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.409029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.409037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.409049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.409057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.965 [2024-11-29 13:10:08.409070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.965 [2024-11-29 13:10:08.409077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0
[... repeated NOTICE pairs elided: interleaved WRITE and READ commands on sqid:1 (lba range 54272-55288, len:8 each) all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 000b-007c, timestamps 2024-11-29 13:10:08.409-13:10:08.412 ...]
00:26:24.968 [2024-11-29 13:10:08.412444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.968 [2024-11-29 13:10:08.412463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.968 [2024-11-29 13:10:08.412482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.968 [2024-11-29 13:10:08.412501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.968 [2024-11-29 13:10:08.412522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.968 [2024-11-29 13:10:08.412543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.968 [2024-11-29 13:10:08.412562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.968 [2024-11-29 13:10:08.412584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.968 [2024-11-29 13:10:08.412604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.968 [2024-11-29 13:10:08.412623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.968 [2024-11-29 13:10:08.412630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.412915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.412923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.969 [2024-11-29 13:10:08.413320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.969 [2024-11-29 13:10:08.413339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.969 [2024-11-29 13:10:08.413664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.969 [2024-11-29 13:10:08.413670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.413682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.413689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.413702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.413709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.413939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.413955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.413970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.413977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.413990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.413997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.970 [2024-11-29 13:10:08.414344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.970 [2024-11-29 13:10:08.414358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.970 [2024-11-29 13:10:08.414365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
[... ~120 further nvme_qpair.c NOTICE command/completion pairs elided: queued WRITE (lba 54344-55288) and READ (lba 54272-54336) commands on sqid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 13:10:08.414-13:10:08.418 ...]
00:26:24.973 [2024-11-29 13:10:08.418391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.973 [2024-11-29 13:10:08.418398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.973 [2024-11-29 13:10:08.418654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.973 [2024-11-29 13:10:08.418660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.418982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.418999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.974 [2024-11-29 13:10:08.419393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.974 [2024-11-29 13:10:08.419418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.974 [2024-11-29 13:10:08.419443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.974 [2024-11-29 13:10:08.419468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.974 [2024-11-29 13:10:08.419492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.974 [2024-11-29 13:10:08.419518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.974 [2024-11-29 13:10:08.419545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.974 [2024-11-29 13:10:08.419709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.974 [2024-11-29 13:10:08.419716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.419733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.419740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.419757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.419764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.419781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.419788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.419806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.419813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.419875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.419885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.419905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.419912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.419931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.419938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.419960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.419968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.419987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.419994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.420012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.420019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.420038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.420045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:08.420064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:08.420071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.975 10604.31 IOPS, 41.42 MiB/s [2024-11-29T12:10:24.795Z] 9846.86 IOPS, 38.46 MiB/s [2024-11-29T12:10:24.795Z] 9190.40 IOPS, 35.90 MiB/s [2024-11-29T12:10:24.795Z] 8756.06 IOPS, 34.20 MiB/s [2024-11-29T12:10:24.795Z] 8878.94 IOPS, 34.68 MiB/s [2024-11-29T12:10:24.795Z] 8980.94 IOPS, 35.08 MiB/s [2024-11-29T12:10:24.795Z] 9170.26 IOPS, 35.82 MiB/s [2024-11-29T12:10:24.795Z] 9360.85 IOPS, 36.57 MiB/s [2024-11-29T12:10:24.795Z] 9522.48 IOPS, 37.20 MiB/s [2024-11-29T12:10:24.795Z] 9587.27 IOPS, 37.45 MiB/s [2024-11-29T12:10:24.795Z] 9633.26 IOPS, 37.63 MiB/s [2024-11-29T12:10:24.795Z] 9709.38 IOPS, 37.93 MiB/s [2024-11-29T12:10:24.795Z] 9837.96 IOPS, 38.43 MiB/s [2024-11-29T12:10:24.795Z] 9957.58 IOPS, 38.90 MiB/s [2024-11-29T12:10:24.795Z] [2024-11-29 13:10:21.967173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.975 [2024-11-29 13:10:21.967218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.975 [2024-11-29 13:10:21.967686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.975 [2024-11-29 13:10:21.967700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.976 [2024-11-29 13:10:21.967863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.967985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.967998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.976 [2024-11-29 13:10:21.968887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.976 [2024-11-29 13:10:21.968908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.976 [2024-11-29 13:10:21.968922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.968929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.968941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.968955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.968968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.968974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.968987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.968994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.969006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.969014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.969026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.969033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.970578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.977 [2024-11-29 13:10:21.970585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.971472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.971489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.971504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.971511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.971524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.971532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.971545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.971553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.971565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.971572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.977 [2024-11-29 13:10:21.971585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.977 [2024-11-29 13:10:21.971592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:24.977 [2024-11-29 13:10:21.971604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.977 [2024-11-29 13:10:21.971611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:24.978 [2024-11-29 13:10:21.972121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.978 [2024-11-29 13:10:21.972127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
[... repeated *NOTICE* READ/WRITE command prints, each followed by an ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion, elided: same qid:1 pattern over timestamps 13:10:21.971604-13:10:21.977960 ...]
00:26:24.981 [2024-11-29 13:10:21.977952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.981 [2024-11-29 13:10:21.977960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.981 [2024-11-29 13:10:21.977972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.977979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.977991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.977998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.978577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.978596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.978617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.978637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.978659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.978679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.978698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.978717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.978729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.978736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.986660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.986670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.986683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.986690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.986703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.981 [2024-11-29 13:10:21.986709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.986721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.986728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.986741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.986748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.987846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.987863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.987878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.987885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.987897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.981 [2024-11-29 13:10:21.987904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.981 [2024-11-29 13:10:21.987920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.987927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.987940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.987953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.987966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.987973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.987985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.987992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.988338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.988467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.988474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.990350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.990367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.990383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.982 [2024-11-29 13:10:21.990390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.990403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.990410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.982 [2024-11-29 13:10:21.990423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.982 [2024-11-29 13:10:21.990430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.983 [2024-11-29 13:10:21.990443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.983 [2024-11-29 13:10:21.990449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.983 [2024-11-29 13:10:21.990462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.983 [2024-11-29 13:10:21.990468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.983 [2024-11-29 13:10:21.990481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.983 [2024-11-29 13:10:21.990488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.983 [2024-11-29 13:10:21.990500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.983 [2024-11-29 13:10:21.990510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.983 [2024-11-29 13:10:21.990523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.983 [2024-11-29 13:10:21.990529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:24.983 [2024-11-29 13:10:21.990542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.983 [2024-11-29 13:10:21.990549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:24.983 [2024-11-29 13:10:21.990561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.983 [2024-11-29 13:10:21.990568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: READ and WRITE commands on sqid:1 (lba 5312-7392, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status, sqhd advancing 004d-003d, timestamps 2024-11-29 13:10:21.990-13:10:21.997 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.986 [2024-11-29 13:10:21.997414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.986 [2024-11-29 13:10:21.997532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.986 [2024-11-29 13:10:21.997668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.986 [2024-11-29 13:10:21.997938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.986 [2024-11-29 13:10:21.997967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.997986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.997998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.986 [2024-11-29 13:10:21.998005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.998018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.986 [2024-11-29 13:10:21.998025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.998038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.998044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.998057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.998064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.998076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.998083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.998095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.998102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.998114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.998121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.998134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.998140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.998153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.986 [2024-11-29 13:10:21.998160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.986 [2024-11-29 13:10:21.998173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.986 [2024-11-29 13:10:21.998180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.998199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.998221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.998240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.998260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.998279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.998298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.998317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.998337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.998356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.998376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.998395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.998414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.998433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.998454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.998467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.998474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.999379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.999399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.999437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.999457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.987 [2024-11-29 13:10:21.999476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.999495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.999515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:21.999528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:21.999535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:22.001315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:22.001333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:22.001348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:22.001355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:22.001367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:22.001374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:22.001389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.987 [2024-11-29 13:10:22.001401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.987 [2024-11-29 13:10:22.001413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.001887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.001964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.001971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.002359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.002380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.002399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.002419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.002438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.002457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.002478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.002498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.988 [2024-11-29 13:10:22.002517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.988 [2024-11-29 13:10:22.002536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.988 [2024-11-29 13:10:22.002548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.002555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.002575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.002594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.002613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.002633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.002652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.002671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.002690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.002710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.002733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.002752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.002772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.002784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.002791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.989 [2024-11-29 13:10:22.004700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.989 [2024-11-29 13:10:22.004719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.989 [2024-11-29 13:10:22.004732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.004739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.004751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.004758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.004770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.004777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.004789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.004796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.004808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.004815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.004828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.004835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.004848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.004856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.006899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.006931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.006940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.007954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.007970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.007985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.007992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.008005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.008012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.008024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.008031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.008044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.008051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.008063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.990 [2024-11-29 13:10:22.008070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.008082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.990 [2024-11-29 13:10:22.008097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.990 [2024-11-29 13:10:22.008110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.008978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.008992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.008999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.009011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.009018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.009030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.009037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.009052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.009059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.009071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.009078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.009091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.991 [2024-11-29 13:10:22.009098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.009110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.009117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.009129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.991 [2024-11-29 13:10:22.009136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.991 [2024-11-29 13:10:22.009149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.009156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.009168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.009175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.009187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.009194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.010123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.010145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.010165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.010185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.010208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.010227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.010247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.010266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.010285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.010305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.010317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.010324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.992 [2024-11-29 13:10:22.011788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.992 [2024-11-29 13:10:22.011800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.992 [2024-11-29 13:10:22.011807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:24.992 [2024-11-29 13:10:22.011819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.011826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.011838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.011845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.011858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.011865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.011877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.011884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.011896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.011903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.011917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.011924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.011937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.011943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.011961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.011968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.011980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.011987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.012000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.012007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.012019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.012026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.012038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.012045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.012057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.012065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.012077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.012084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.012096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.012103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.012115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.012122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.012134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.012141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.012153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.012162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.013647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.013670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.013690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.013709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.013729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.013748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.013767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.013786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.013805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.013824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.013837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.013844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.014641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.014667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.014686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.014706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.014725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.014744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.014763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.014782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.993 [2024-11-29 13:10:22.014801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.993 [2024-11-29 13:10:22.014820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:24.993 [2024-11-29 13:10:22.014833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.014840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.014852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.014859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.014871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.014878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.014894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.014901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.014914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.014920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.014933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.014940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.014958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.014965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.014978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.014984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.014997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.994 [2024-11-29 13:10:22.015976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.015988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.015995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.016007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.994 [2024-11-29 13:10:22.016014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:24.994 [2024-11-29 13:10:22.016026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.016033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:24.995 [2024-11-29 13:10:22.017918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:24.995 [2024-11-29 13:10:22.017955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.995 [2024-11-29 13:10:22.017962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:24.996 [2024-11-29 13:10:22.017982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.996 [2024-11-29 13:10:22.017989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:24.996 [2024-11-29 13:10:22.018002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.996 [2024-11-29 13:10:22.018009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:24.996 [2024-11-29 13:10:22.018021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.996 [2024-11-29 13:10:22.018028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:24.996 [2024-11-29 13:10:22.018040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.018049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.018062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.018068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.018081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.018087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.018100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.018106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.018119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.018125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.018138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.018145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.019696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.019728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.019735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.020215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.020229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.020244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.020252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.020264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.996 [2024-11-29 13:10:22.020271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.996 [2024-11-29 13:10:22.020283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.996 [2024-11-29 13:10:22.020290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.020303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.020310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.020322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.020332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.020345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.020352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.020364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.020371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.020383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.020390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.997 [2024-11-29 13:10:22.022874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.022988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.022995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.023007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.023014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.997 [2024-11-29 13:10:22.023026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.997 [2024-11-29 13:10:22.023034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.023054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.023073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.023802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.023821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.023840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.023860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.023879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.023898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.023911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.023918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.025341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.025363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.025521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.025540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.025559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.025579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.998 [2024-11-29 13:10:22.025637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.025660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.025679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:24.998 [2024-11-29 13:10:22.025691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.998 [2024-11-29 13:10:22.025698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.025717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.025737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.025756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.025775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.025794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.025814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.025833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.025852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.025872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.025893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.025912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.025932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.025957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.025970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.025976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.026836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.026851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.026867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.026874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.026886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.026893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.026906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.026913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.026925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.026932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.026945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.026957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.026970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.026976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.026989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.026999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.027018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.027037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.027057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.027076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.027095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.027114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.027134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.027153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.027172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.027191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.999 [2024-11-29 13:10:22.027210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.027229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.027251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.027270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.027290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.027309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.999 [2024-11-29 13:10:22.027321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.999 [2024-11-29 13:10:22.027328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.027347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.027367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.027823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.027844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.027863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.027883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.027902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.027924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.027944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.027972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.027984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.027991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.028010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.028029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.028049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.028068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.028088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.028107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.028128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.028148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.028169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.028188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.028201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.028208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.029288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.029310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.029330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.029349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.029368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.029388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.029408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.029427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.029446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.029466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.000 [2024-11-29 13:10:22.029488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.000 [2024-11-29 13:10:22.029507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.000 [2024-11-29 13:10:22.029520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.029585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.029604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.029623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.029643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.029856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.029875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.029913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.029960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.029973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.029980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.031269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.031301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.031321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.031340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.031359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.031378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.031398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.031417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.031436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.031455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.031477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.031497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.001 [2024-11-29 13:10:22.031516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.031536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:25.001 [2024-11-29 13:10:22.031548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.001 [2024-11-29 13:10:22.031555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:25.001 [2024-11-29 13:10:22.031567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.002 [2024-11-29 13:10:22.031574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:25.002 10005.85 IOPS, 39.09 MiB/s [2024-11-29T12:10:24.822Z] 10028.68 IOPS, 39.17 MiB/s [2024-11-29T12:10:24.822Z] Received shutdown signal, test time was about 28.641041 seconds
00:26:25.002
00:26:25.002 Latency(us)
00:26:25.002 [2024-11-29T12:10:24.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.002 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:25.002 Verification LBA range: start 0x0 length 0x4000
00:26:25.002 Nvme0n1 : 28.64 10048.13 39.25 0.00 0.00 12717.01 1296.47 3078254.41
00:26:25.002 [2024-11-29T12:10:24.822Z] ===================================================================================================================
00:26:25.002 [2024-11-29T12:10:24.822Z] Total : 10048.13 39.25 0.00 0.00 12717.01 1296.47 3078254.41
00:26:25.002 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:25.002 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:25.002 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:25.002 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:25.002 13:10:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:25.002 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:25.002 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:25.002 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:25.002 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.002 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:25.002 rmmod nvme_tcp 00:26:25.002 rmmod nvme_fabrics 00:26:25.384 rmmod nvme_keyring 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2102978 ']' 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2102978 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2102978 ']' 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2102978 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2102978 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2102978' 00:26:25.384 killing process with pid 2102978 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2102978 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2102978 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:25.384 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:25.385 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:25.385 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:25.385 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:25.385 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.385 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.385 13:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.374 13:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.374 00:26:27.374 real 0m40.685s 00:26:27.374 user 1m50.499s 00:26:27.374 sys 0m11.244s 00:26:27.374 13:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.374 13:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:27.374 ************************************ 00:26:27.374 END TEST nvmf_host_multipath_status 00:26:27.374 ************************************ 00:26:27.374 13:10:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:27.374 13:10:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:27.374 13:10:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.374 13:10:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.374 ************************************ 00:26:27.374 START TEST nvmf_discovery_remove_ifc 00:26:27.374 ************************************ 00:26:27.374 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:27.633 * Looking for test storage... 
00:26:27.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:26:27.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.633 --rc genhtml_branch_coverage=1 00:26:27.633 --rc genhtml_function_coverage=1 00:26:27.633 --rc genhtml_legend=1 00:26:27.633 --rc geninfo_all_blocks=1 00:26:27.633 --rc geninfo_unexecuted_blocks=1 00:26:27.633 00:26:27.633 ' 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:27.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.633 --rc genhtml_branch_coverage=1 00:26:27.633 --rc genhtml_function_coverage=1 00:26:27.633 --rc genhtml_legend=1 00:26:27.633 --rc geninfo_all_blocks=1 00:26:27.633 --rc geninfo_unexecuted_blocks=1 00:26:27.633 00:26:27.633 ' 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:27.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.633 --rc genhtml_branch_coverage=1 00:26:27.633 --rc genhtml_function_coverage=1 00:26:27.633 --rc genhtml_legend=1 00:26:27.633 --rc geninfo_all_blocks=1 00:26:27.633 --rc geninfo_unexecuted_blocks=1 00:26:27.633 00:26:27.633 ' 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:27.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.633 --rc genhtml_branch_coverage=1 00:26:27.633 --rc genhtml_function_coverage=1 00:26:27.633 --rc genhtml_legend=1 00:26:27.633 --rc geninfo_all_blocks=1 00:26:27.633 --rc geninfo_unexecuted_blocks=1 00:26:27.633 00:26:27.633 ' 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.633 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:27.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:27.634 
13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.634 13:10:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:34.197 13:10:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.197 13:10:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:34.197 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.197 13:10:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:34.197 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:34.197 Found net devices under 0000:86:00.0: cvl_0_0 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:34.197 Found net devices under 0000:86:00.1: cvl_0_1 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:34.197 13:10:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.197 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.197 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.197 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:34.197 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:34.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:26:34.197 00:26:34.197 --- 10.0.0.2 ping statistics --- 00:26:34.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.198 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:34.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:26:34.198 00:26:34.198 --- 10.0.0.1 ping statistics --- 00:26:34.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.198 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2111972 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2111972 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2111972 ']' 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.198 [2024-11-29 13:10:33.158099] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:26:34.198 [2024-11-29 13:10:33.158146] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.198 [2024-11-29 13:10:33.226437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.198 [2024-11-29 13:10:33.269152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.198 [2024-11-29 13:10:33.269187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:34.198 [2024-11-29 13:10:33.269197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.198 [2024-11-29 13:10:33.269203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.198 [2024-11-29 13:10:33.269208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:34.198 [2024-11-29 13:10:33.269778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.198 [2024-11-29 13:10:33.409361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.198 [2024-11-29 13:10:33.417554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:34.198 null0 00:26:34.198 [2024-11-29 13:10:33.449527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2112089 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2112089 /tmp/host.sock 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2112089 ']' 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:34.198 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.198 [2024-11-29 13:10:33.505810] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:26:34.198 [2024-11-29 13:10:33.505852] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112089 ] 00:26:34.198 [2024-11-29 13:10:33.563277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.198 [2024-11-29 13:10:33.607075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.198 13:10:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.198 13:10:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.134 [2024-11-29 13:10:34.765851] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:35.134 [2024-11-29 13:10:34.765873] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:35.134 [2024-11-29 13:10:34.765888] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:35.134 [2024-11-29 13:10:34.853150] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:35.393 [2024-11-29 13:10:35.036193] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:35.393 [2024-11-29 13:10:35.036986] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d40a50:1 started. 
00:26:35.393 [2024-11-29 13:10:35.038326] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:35.393 [2024-11-29 13:10:35.038365] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:35.393 [2024-11-29 13:10:35.038383] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:35.393 [2024-11-29 13:10:35.038396] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:35.393 [2024-11-29 13:10:35.038417] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.393 [2024-11-29 13:10:35.044859] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d40a50 was disconnected and freed. delete nvme_qpair. 
00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.393 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.650 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:35.650 13:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.583 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.583 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.583 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.583 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.583 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.584 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.584 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.584 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.584 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.584 13:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.549 13:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:38.921 13:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.853 13:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.853 13:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.853 13:10:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.853 13:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.853 13:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.853 13:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.853 13:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.853 13:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.853 13:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.853 13:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.785 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.785 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.786 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.786 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.786 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.786 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.786 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.786 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.786 [2024-11-29 13:10:40.480025] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:40.786 [2024-11-29 13:10:40.480067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.786 [2024-11-29 13:10:40.480095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.786 [2024-11-29 13:10:40.480105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.786 [2024-11-29 13:10:40.480112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.786 [2024-11-29 13:10:40.480120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.786 [2024-11-29 13:10:40.480127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.786 [2024-11-29 13:10:40.480135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.786 [2024-11-29 13:10:40.480141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.786 [2024-11-29 13:10:40.480149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.786 [2024-11-29 13:10:40.480156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.786 [2024-11-29 13:10:40.480162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1d1d240 is same with the state(6) to be set 00:26:40.786 [2024-11-29 13:10:40.490048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1d240 (9): Bad file descriptor 00:26:40.786 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:40.786 13:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.786 [2024-11-29 13:10:40.500083] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:40.786 [2024-11-29 13:10:40.500096] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:40.786 [2024-11-29 13:10:40.500101] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:40.786 [2024-11-29 13:10:40.500106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:40.786 [2024-11-29 13:10:40.500128] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:41.717 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.718 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.718 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.718 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.718 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.718 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.718 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.718 [2024-11-29 13:10:41.523959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:41.718 [2024-11-29 13:10:41.523995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d1d240 with addr=10.0.0.2, port=4420 00:26:41.718 [2024-11-29 13:10:41.524006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1d240 is same with the state(6) to be set 00:26:41.718 [2024-11-29 13:10:41.524026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1d240 (9): Bad file descriptor 00:26:41.718 [2024-11-29 13:10:41.524324] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:41.718 [2024-11-29 13:10:41.524344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:41.718 [2024-11-29 13:10:41.524352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:41.718 [2024-11-29 13:10:41.524361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:41.718 [2024-11-29 13:10:41.524367] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:41.718 [2024-11-29 13:10:41.524373] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:41.718 [2024-11-29 13:10:41.524377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:41.718 [2024-11-29 13:10:41.524384] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:41.718 [2024-11-29 13:10:41.524389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:41.718 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.975 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:41.975 13:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.908 [2024-11-29 13:10:42.526861] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:42.908 [2024-11-29 13:10:42.526881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:42.908 [2024-11-29 13:10:42.526891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:42.908 [2024-11-29 13:10:42.526898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:42.908 [2024-11-29 13:10:42.526904] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:42.908 [2024-11-29 13:10:42.526910] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:42.908 [2024-11-29 13:10:42.526915] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:42.908 [2024-11-29 13:10:42.526919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:42.908 [2024-11-29 13:10:42.526939] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:42.908 [2024-11-29 13:10:42.526962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.908 [2024-11-29 13:10:42.526971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.908 [2024-11-29 13:10:42.526980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.908 [2024-11-29 13:10:42.526986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.908 [2024-11-29 13:10:42.526994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:42.908 [2024-11-29 13:10:42.527004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.908 [2024-11-29 13:10:42.527012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.908 [2024-11-29 13:10:42.527019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.908 [2024-11-29 13:10:42.527026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:42.908 [2024-11-29 13:10:42.527033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:42.908 [2024-11-29 13:10:42.527040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:42.908 [2024-11-29 13:10:42.527292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d0c910 (9): Bad file descriptor 00:26:42.908 [2024-11-29 13:10:42.528304] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:42.908 [2024-11-29 13:10:42.528317] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:42.908 13:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:44.282 13:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:44.846 [2024-11-29 13:10:44.581018] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:44.846 [2024-11-29 13:10:44.581034] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:44.846 [2024-11-29 13:10:44.581049] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:45.103 [2024-11-29 13:10:44.668315] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:45.103 13:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.103 [2024-11-29 13:10:44.893467] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:45.103 [2024-11-29 13:10:44.894114] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1d4a4a0:1 started. 
00:26:45.103 [2024-11-29 13:10:44.895157] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:45.103 [2024-11-29 13:10:44.895187] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:45.103 [2024-11-29 13:10:44.895203] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:45.103 [2024-11-29 13:10:44.895216] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:45.103 [2024-11-29 13:10:44.895223] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:45.103 [2024-11-29 13:10:44.900168] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1d4a4a0 was disconnected and freed. delete nvme_qpair. 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:46.042 13:10:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2112089 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2112089 ']' 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2112089 00:26:46.042 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:46.300 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.300 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2112089 00:26:46.300 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:46.300 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:46.300 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2112089' 00:26:46.300 killing process with pid 2112089 00:26:46.300 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2112089 00:26:46.300 13:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2112089 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.300 
13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.300 rmmod nvme_tcp 00:26:46.300 rmmod nvme_fabrics 00:26:46.300 rmmod nvme_keyring 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:46.300 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2111972 ']' 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2111972 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2111972 ']' 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2111972 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2111972 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2111972' 00:26:46.559 
killing process with pid 2111972 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2111972 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2111972 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.559 13:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:49.093 00:26:49.093 real 0m21.287s 00:26:49.093 user 0m26.636s 00:26:49.093 sys 0m5.681s 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.093 ************************************ 00:26:49.093 END TEST nvmf_discovery_remove_ifc 00:26:49.093 ************************************ 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.093 ************************************ 00:26:49.093 START TEST nvmf_identify_kernel_target 00:26:49.093 ************************************ 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:49.093 * Looking for test storage... 
00:26:49.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:49.093 13:10:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.093 13:10:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:49.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.093 --rc genhtml_branch_coverage=1 00:26:49.093 --rc genhtml_function_coverage=1 00:26:49.093 --rc genhtml_legend=1 00:26:49.093 --rc geninfo_all_blocks=1 00:26:49.093 --rc geninfo_unexecuted_blocks=1 00:26:49.093 00:26:49.093 ' 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:49.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.093 --rc genhtml_branch_coverage=1 00:26:49.093 --rc genhtml_function_coverage=1 00:26:49.093 --rc genhtml_legend=1 00:26:49.093 --rc geninfo_all_blocks=1 00:26:49.093 --rc geninfo_unexecuted_blocks=1 00:26:49.093 00:26:49.093 ' 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:49.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.093 --rc genhtml_branch_coverage=1 00:26:49.093 --rc genhtml_function_coverage=1 00:26:49.093 --rc genhtml_legend=1 00:26:49.093 --rc geninfo_all_blocks=1 00:26:49.093 --rc geninfo_unexecuted_blocks=1 00:26:49.093 00:26:49.093 ' 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:49.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.093 --rc genhtml_branch_coverage=1 00:26:49.093 --rc genhtml_function_coverage=1 00:26:49.093 --rc genhtml_legend=1 00:26:49.093 --rc geninfo_all_blocks=1 00:26:49.093 --rc geninfo_unexecuted_blocks=1 00:26:49.093 00:26:49.093 ' 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.093 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:49.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:49.094 13:10:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.364 13:10:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:54.364 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.364 13:10:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.364 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:54.365 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.365 13:10:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:54.365 Found net devices under 0000:86:00.0: cvl_0_0 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:54.365 Found net devices under 0000:86:00.1: cvl_0_1 
00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:54.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:54.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:26:54.365 00:26:54.365 --- 10.0.0.2 ping statistics --- 00:26:54.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.365 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:26:54.365 00:26:54.365 --- 10.0.0.1 ping statistics --- 00:26:54.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.365 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:54.365 13:10:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:54.365 
13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.365 13:10:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:56.896 Waiting for block devices as requested 00:26:56.896 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:56.896 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:57.154 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:57.154 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:57.154 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:57.154 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:57.412 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:57.412 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:57.412 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:57.669 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:57.669 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:57.669 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:57.669 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:57.927 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:57.927 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:26:57.927 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:58.186 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:58.186 No valid GPT data, bailing 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:58.186 00:26:58.186 Discovery Log Number of Records 2, Generation counter 2 00:26:58.186 =====Discovery Log Entry 0====== 00:26:58.186 trtype: tcp 00:26:58.186 adrfam: ipv4 00:26:58.186 subtype: current discovery subsystem 
00:26:58.186 treq: not specified, sq flow control disable supported 00:26:58.186 portid: 1 00:26:58.186 trsvcid: 4420 00:26:58.186 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:58.186 traddr: 10.0.0.1 00:26:58.186 eflags: none 00:26:58.186 sectype: none 00:26:58.186 =====Discovery Log Entry 1====== 00:26:58.186 trtype: tcp 00:26:58.186 adrfam: ipv4 00:26:58.186 subtype: nvme subsystem 00:26:58.186 treq: not specified, sq flow control disable supported 00:26:58.186 portid: 1 00:26:58.186 trsvcid: 4420 00:26:58.186 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:58.186 traddr: 10.0.0.1 00:26:58.186 eflags: none 00:26:58.186 sectype: none 00:26:58.186 13:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:58.186 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:58.447 ===================================================== 00:26:58.447 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:58.447 ===================================================== 00:26:58.447 Controller Capabilities/Features 00:26:58.447 ================================ 00:26:58.447 Vendor ID: 0000 00:26:58.447 Subsystem Vendor ID: 0000 00:26:58.447 Serial Number: 16d7587c53fa3dd3f94f 00:26:58.447 Model Number: Linux 00:26:58.447 Firmware Version: 6.8.9-20 00:26:58.447 Recommended Arb Burst: 0 00:26:58.447 IEEE OUI Identifier: 00 00 00 00:26:58.447 Multi-path I/O 00:26:58.447 May have multiple subsystem ports: No 00:26:58.447 May have multiple controllers: No 00:26:58.447 Associated with SR-IOV VF: No 00:26:58.447 Max Data Transfer Size: Unlimited 00:26:58.447 Max Number of Namespaces: 0 00:26:58.447 Max Number of I/O Queues: 1024 00:26:58.447 NVMe Specification Version (VS): 1.3 00:26:58.447 NVMe Specification Version (Identify): 1.3 00:26:58.447 Maximum Queue Entries: 1024 
00:26:58.447 Contiguous Queues Required: No 00:26:58.447 Arbitration Mechanisms Supported 00:26:58.447 Weighted Round Robin: Not Supported 00:26:58.447 Vendor Specific: Not Supported 00:26:58.447 Reset Timeout: 7500 ms 00:26:58.447 Doorbell Stride: 4 bytes 00:26:58.447 NVM Subsystem Reset: Not Supported 00:26:58.447 Command Sets Supported 00:26:58.447 NVM Command Set: Supported 00:26:58.447 Boot Partition: Not Supported 00:26:58.447 Memory Page Size Minimum: 4096 bytes 00:26:58.447 Memory Page Size Maximum: 4096 bytes 00:26:58.447 Persistent Memory Region: Not Supported 00:26:58.447 Optional Asynchronous Events Supported 00:26:58.447 Namespace Attribute Notices: Not Supported 00:26:58.447 Firmware Activation Notices: Not Supported 00:26:58.447 ANA Change Notices: Not Supported 00:26:58.447 PLE Aggregate Log Change Notices: Not Supported 00:26:58.447 LBA Status Info Alert Notices: Not Supported 00:26:58.447 EGE Aggregate Log Change Notices: Not Supported 00:26:58.447 Normal NVM Subsystem Shutdown event: Not Supported 00:26:58.447 Zone Descriptor Change Notices: Not Supported 00:26:58.447 Discovery Log Change Notices: Supported 00:26:58.447 Controller Attributes 00:26:58.447 128-bit Host Identifier: Not Supported 00:26:58.447 Non-Operational Permissive Mode: Not Supported 00:26:58.447 NVM Sets: Not Supported 00:26:58.447 Read Recovery Levels: Not Supported 00:26:58.447 Endurance Groups: Not Supported 00:26:58.447 Predictable Latency Mode: Not Supported 00:26:58.447 Traffic Based Keep ALive: Not Supported 00:26:58.447 Namespace Granularity: Not Supported 00:26:58.447 SQ Associations: Not Supported 00:26:58.447 UUID List: Not Supported 00:26:58.447 Multi-Domain Subsystem: Not Supported 00:26:58.447 Fixed Capacity Management: Not Supported 00:26:58.447 Variable Capacity Management: Not Supported 00:26:58.447 Delete Endurance Group: Not Supported 00:26:58.447 Delete NVM Set: Not Supported 00:26:58.447 Extended LBA Formats Supported: Not Supported 00:26:58.447 Flexible 
Data Placement Supported: Not Supported 00:26:58.447 00:26:58.447 Controller Memory Buffer Support 00:26:58.447 ================================ 00:26:58.447 Supported: No 00:26:58.447 00:26:58.447 Persistent Memory Region Support 00:26:58.447 ================================ 00:26:58.447 Supported: No 00:26:58.447 00:26:58.447 Admin Command Set Attributes 00:26:58.447 ============================ 00:26:58.447 Security Send/Receive: Not Supported 00:26:58.447 Format NVM: Not Supported 00:26:58.447 Firmware Activate/Download: Not Supported 00:26:58.447 Namespace Management: Not Supported 00:26:58.447 Device Self-Test: Not Supported 00:26:58.447 Directives: Not Supported 00:26:58.447 NVMe-MI: Not Supported 00:26:58.447 Virtualization Management: Not Supported 00:26:58.447 Doorbell Buffer Config: Not Supported 00:26:58.447 Get LBA Status Capability: Not Supported 00:26:58.447 Command & Feature Lockdown Capability: Not Supported 00:26:58.447 Abort Command Limit: 1 00:26:58.447 Async Event Request Limit: 1 00:26:58.447 Number of Firmware Slots: N/A 00:26:58.447 Firmware Slot 1 Read-Only: N/A 00:26:58.447 Firmware Activation Without Reset: N/A 00:26:58.447 Multiple Update Detection Support: N/A 00:26:58.447 Firmware Update Granularity: No Information Provided 00:26:58.447 Per-Namespace SMART Log: No 00:26:58.447 Asymmetric Namespace Access Log Page: Not Supported 00:26:58.447 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:58.447 Command Effects Log Page: Not Supported 00:26:58.447 Get Log Page Extended Data: Supported 00:26:58.447 Telemetry Log Pages: Not Supported 00:26:58.447 Persistent Event Log Pages: Not Supported 00:26:58.447 Supported Log Pages Log Page: May Support 00:26:58.447 Commands Supported & Effects Log Page: Not Supported 00:26:58.447 Feature Identifiers & Effects Log Page:May Support 00:26:58.447 NVMe-MI Commands & Effects Log Page: May Support 00:26:58.447 Data Area 4 for Telemetry Log: Not Supported 00:26:58.447 Error Log Page Entries 
Supported: 1 00:26:58.447 Keep Alive: Not Supported 00:26:58.447 00:26:58.447 NVM Command Set Attributes 00:26:58.447 ========================== 00:26:58.447 Submission Queue Entry Size 00:26:58.447 Max: 1 00:26:58.447 Min: 1 00:26:58.447 Completion Queue Entry Size 00:26:58.447 Max: 1 00:26:58.447 Min: 1 00:26:58.447 Number of Namespaces: 0 00:26:58.447 Compare Command: Not Supported 00:26:58.447 Write Uncorrectable Command: Not Supported 00:26:58.447 Dataset Management Command: Not Supported 00:26:58.447 Write Zeroes Command: Not Supported 00:26:58.447 Set Features Save Field: Not Supported 00:26:58.447 Reservations: Not Supported 00:26:58.447 Timestamp: Not Supported 00:26:58.447 Copy: Not Supported 00:26:58.447 Volatile Write Cache: Not Present 00:26:58.447 Atomic Write Unit (Normal): 1 00:26:58.447 Atomic Write Unit (PFail): 1 00:26:58.447 Atomic Compare & Write Unit: 1 00:26:58.448 Fused Compare & Write: Not Supported 00:26:58.448 Scatter-Gather List 00:26:58.448 SGL Command Set: Supported 00:26:58.448 SGL Keyed: Not Supported 00:26:58.448 SGL Bit Bucket Descriptor: Not Supported 00:26:58.448 SGL Metadata Pointer: Not Supported 00:26:58.448 Oversized SGL: Not Supported 00:26:58.448 SGL Metadata Address: Not Supported 00:26:58.448 SGL Offset: Supported 00:26:58.448 Transport SGL Data Block: Not Supported 00:26:58.448 Replay Protected Memory Block: Not Supported 00:26:58.448 00:26:58.448 Firmware Slot Information 00:26:58.448 ========================= 00:26:58.448 Active slot: 0 00:26:58.448 00:26:58.448 00:26:58.448 Error Log 00:26:58.448 ========= 00:26:58.448 00:26:58.448 Active Namespaces 00:26:58.448 ================= 00:26:58.448 Discovery Log Page 00:26:58.448 ================== 00:26:58.448 Generation Counter: 2 00:26:58.448 Number of Records: 2 00:26:58.448 Record Format: 0 00:26:58.448 00:26:58.448 Discovery Log Entry 0 00:26:58.448 ---------------------- 00:26:58.448 Transport Type: 3 (TCP) 00:26:58.448 Address Family: 1 (IPv4) 00:26:58.448 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:58.448 Entry Flags: 00:26:58.448 Duplicate Returned Information: 0 00:26:58.448 Explicit Persistent Connection Support for Discovery: 0 00:26:58.448 Transport Requirements: 00:26:58.448 Secure Channel: Not Specified 00:26:58.448 Port ID: 1 (0x0001) 00:26:58.448 Controller ID: 65535 (0xffff) 00:26:58.448 Admin Max SQ Size: 32 00:26:58.448 Transport Service Identifier: 4420 00:26:58.448 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:58.448 Transport Address: 10.0.0.1 00:26:58.448 Discovery Log Entry 1 00:26:58.448 ---------------------- 00:26:58.448 Transport Type: 3 (TCP) 00:26:58.448 Address Family: 1 (IPv4) 00:26:58.448 Subsystem Type: 2 (NVM Subsystem) 00:26:58.448 Entry Flags: 00:26:58.448 Duplicate Returned Information: 0 00:26:58.448 Explicit Persistent Connection Support for Discovery: 0 00:26:58.448 Transport Requirements: 00:26:58.448 Secure Channel: Not Specified 00:26:58.448 Port ID: 1 (0x0001) 00:26:58.448 Controller ID: 65535 (0xffff) 00:26:58.448 Admin Max SQ Size: 32 00:26:58.448 Transport Service Identifier: 4420 00:26:58.448 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:58.448 Transport Address: 10.0.0.1 00:26:58.448 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:58.448 get_feature(0x01) failed 00:26:58.448 get_feature(0x02) failed 00:26:58.448 get_feature(0x04) failed 00:26:58.448 ===================================================== 00:26:58.448 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:58.448 ===================================================== 00:26:58.448 Controller Capabilities/Features 00:26:58.448 ================================ 00:26:58.448 Vendor ID: 0000 00:26:58.448 Subsystem Vendor ID: 
0000 00:26:58.448 Serial Number: 968beb73da84d280fd01 00:26:58.448 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:58.448 Firmware Version: 6.8.9-20 00:26:58.448 Recommended Arb Burst: 6 00:26:58.448 IEEE OUI Identifier: 00 00 00 00:26:58.448 Multi-path I/O 00:26:58.448 May have multiple subsystem ports: Yes 00:26:58.448 May have multiple controllers: Yes 00:26:58.448 Associated with SR-IOV VF: No 00:26:58.448 Max Data Transfer Size: Unlimited 00:26:58.448 Max Number of Namespaces: 1024 00:26:58.448 Max Number of I/O Queues: 128 00:26:58.448 NVMe Specification Version (VS): 1.3 00:26:58.448 NVMe Specification Version (Identify): 1.3 00:26:58.448 Maximum Queue Entries: 1024 00:26:58.448 Contiguous Queues Required: No 00:26:58.448 Arbitration Mechanisms Supported 00:26:58.448 Weighted Round Robin: Not Supported 00:26:58.448 Vendor Specific: Not Supported 00:26:58.448 Reset Timeout: 7500 ms 00:26:58.448 Doorbell Stride: 4 bytes 00:26:58.448 NVM Subsystem Reset: Not Supported 00:26:58.448 Command Sets Supported 00:26:58.448 NVM Command Set: Supported 00:26:58.448 Boot Partition: Not Supported 00:26:58.448 Memory Page Size Minimum: 4096 bytes 00:26:58.448 Memory Page Size Maximum: 4096 bytes 00:26:58.448 Persistent Memory Region: Not Supported 00:26:58.448 Optional Asynchronous Events Supported 00:26:58.448 Namespace Attribute Notices: Supported 00:26:58.448 Firmware Activation Notices: Not Supported 00:26:58.448 ANA Change Notices: Supported 00:26:58.448 PLE Aggregate Log Change Notices: Not Supported 00:26:58.448 LBA Status Info Alert Notices: Not Supported 00:26:58.448 EGE Aggregate Log Change Notices: Not Supported 00:26:58.448 Normal NVM Subsystem Shutdown event: Not Supported 00:26:58.448 Zone Descriptor Change Notices: Not Supported 00:26:58.448 Discovery Log Change Notices: Not Supported 00:26:58.448 Controller Attributes 00:26:58.448 128-bit Host Identifier: Supported 00:26:58.448 Non-Operational Permissive Mode: Not Supported 00:26:58.448 NVM Sets: Not 
Supported 00:26:58.448 Read Recovery Levels: Not Supported 00:26:58.448 Endurance Groups: Not Supported 00:26:58.448 Predictable Latency Mode: Not Supported 00:26:58.448 Traffic Based Keep ALive: Supported 00:26:58.448 Namespace Granularity: Not Supported 00:26:58.448 SQ Associations: Not Supported 00:26:58.448 UUID List: Not Supported 00:26:58.448 Multi-Domain Subsystem: Not Supported 00:26:58.448 Fixed Capacity Management: Not Supported 00:26:58.448 Variable Capacity Management: Not Supported 00:26:58.448 Delete Endurance Group: Not Supported 00:26:58.448 Delete NVM Set: Not Supported 00:26:58.448 Extended LBA Formats Supported: Not Supported 00:26:58.448 Flexible Data Placement Supported: Not Supported 00:26:58.448 00:26:58.448 Controller Memory Buffer Support 00:26:58.448 ================================ 00:26:58.448 Supported: No 00:26:58.448 00:26:58.448 Persistent Memory Region Support 00:26:58.448 ================================ 00:26:58.448 Supported: No 00:26:58.448 00:26:58.448 Admin Command Set Attributes 00:26:58.448 ============================ 00:26:58.448 Security Send/Receive: Not Supported 00:26:58.448 Format NVM: Not Supported 00:26:58.448 Firmware Activate/Download: Not Supported 00:26:58.448 Namespace Management: Not Supported 00:26:58.448 Device Self-Test: Not Supported 00:26:58.448 Directives: Not Supported 00:26:58.448 NVMe-MI: Not Supported 00:26:58.448 Virtualization Management: Not Supported 00:26:58.448 Doorbell Buffer Config: Not Supported 00:26:58.448 Get LBA Status Capability: Not Supported 00:26:58.448 Command & Feature Lockdown Capability: Not Supported 00:26:58.448 Abort Command Limit: 4 00:26:58.448 Async Event Request Limit: 4 00:26:58.448 Number of Firmware Slots: N/A 00:26:58.448 Firmware Slot 1 Read-Only: N/A 00:26:58.448 Firmware Activation Without Reset: N/A 00:26:58.448 Multiple Update Detection Support: N/A 00:26:58.448 Firmware Update Granularity: No Information Provided 00:26:58.448 Per-Namespace SMART Log: Yes 
00:26:58.448 Asymmetric Namespace Access Log Page: Supported 00:26:58.448 ANA Transition Time : 10 sec 00:26:58.448 00:26:58.448 Asymmetric Namespace Access Capabilities 00:26:58.448 ANA Optimized State : Supported 00:26:58.448 ANA Non-Optimized State : Supported 00:26:58.448 ANA Inaccessible State : Supported 00:26:58.448 ANA Persistent Loss State : Supported 00:26:58.448 ANA Change State : Supported 00:26:58.448 ANAGRPID is not changed : No 00:26:58.448 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:58.448 00:26:58.448 ANA Group Identifier Maximum : 128 00:26:58.448 Number of ANA Group Identifiers : 128 00:26:58.448 Max Number of Allowed Namespaces : 1024 00:26:58.448 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:58.448 Command Effects Log Page: Supported 00:26:58.448 Get Log Page Extended Data: Supported 00:26:58.448 Telemetry Log Pages: Not Supported 00:26:58.448 Persistent Event Log Pages: Not Supported 00:26:58.448 Supported Log Pages Log Page: May Support 00:26:58.448 Commands Supported & Effects Log Page: Not Supported 00:26:58.448 Feature Identifiers & Effects Log Page:May Support 00:26:58.448 NVMe-MI Commands & Effects Log Page: May Support 00:26:58.448 Data Area 4 for Telemetry Log: Not Supported 00:26:58.448 Error Log Page Entries Supported: 128 00:26:58.448 Keep Alive: Supported 00:26:58.448 Keep Alive Granularity: 1000 ms 00:26:58.448 00:26:58.448 NVM Command Set Attributes 00:26:58.448 ========================== 00:26:58.448 Submission Queue Entry Size 00:26:58.448 Max: 64 00:26:58.448 Min: 64 00:26:58.448 Completion Queue Entry Size 00:26:58.448 Max: 16 00:26:58.448 Min: 16 00:26:58.448 Number of Namespaces: 1024 00:26:58.448 Compare Command: Not Supported 00:26:58.448 Write Uncorrectable Command: Not Supported 00:26:58.448 Dataset Management Command: Supported 00:26:58.448 Write Zeroes Command: Supported 00:26:58.449 Set Features Save Field: Not Supported 00:26:58.449 Reservations: Not Supported 00:26:58.449 Timestamp: Not Supported 
00:26:58.449 Copy: Not Supported 00:26:58.449 Volatile Write Cache: Present 00:26:58.449 Atomic Write Unit (Normal): 1 00:26:58.449 Atomic Write Unit (PFail): 1 00:26:58.449 Atomic Compare & Write Unit: 1 00:26:58.449 Fused Compare & Write: Not Supported 00:26:58.449 Scatter-Gather List 00:26:58.449 SGL Command Set: Supported 00:26:58.449 SGL Keyed: Not Supported 00:26:58.449 SGL Bit Bucket Descriptor: Not Supported 00:26:58.449 SGL Metadata Pointer: Not Supported 00:26:58.449 Oversized SGL: Not Supported 00:26:58.449 SGL Metadata Address: Not Supported 00:26:58.449 SGL Offset: Supported 00:26:58.449 Transport SGL Data Block: Not Supported 00:26:58.449 Replay Protected Memory Block: Not Supported 00:26:58.449 00:26:58.449 Firmware Slot Information 00:26:58.449 ========================= 00:26:58.449 Active slot: 0 00:26:58.449 00:26:58.449 Asymmetric Namespace Access 00:26:58.449 =========================== 00:26:58.449 Change Count : 0 00:26:58.449 Number of ANA Group Descriptors : 1 00:26:58.449 ANA Group Descriptor : 0 00:26:58.449 ANA Group ID : 1 00:26:58.449 Number of NSID Values : 1 00:26:58.449 Change Count : 0 00:26:58.449 ANA State : 1 00:26:58.449 Namespace Identifier : 1 00:26:58.449 00:26:58.449 Commands Supported and Effects 00:26:58.449 ============================== 00:26:58.449 Admin Commands 00:26:58.449 -------------- 00:26:58.449 Get Log Page (02h): Supported 00:26:58.449 Identify (06h): Supported 00:26:58.449 Abort (08h): Supported 00:26:58.449 Set Features (09h): Supported 00:26:58.449 Get Features (0Ah): Supported 00:26:58.449 Asynchronous Event Request (0Ch): Supported 00:26:58.449 Keep Alive (18h): Supported 00:26:58.449 I/O Commands 00:26:58.449 ------------ 00:26:58.449 Flush (00h): Supported 00:26:58.449 Write (01h): Supported LBA-Change 00:26:58.449 Read (02h): Supported 00:26:58.449 Write Zeroes (08h): Supported LBA-Change 00:26:58.449 Dataset Management (09h): Supported 00:26:58.449 00:26:58.449 Error Log 00:26:58.449 ========= 
00:26:58.449 Entry: 0 00:26:58.449 Error Count: 0x3 00:26:58.449 Submission Queue Id: 0x0 00:26:58.449 Command Id: 0x5 00:26:58.449 Phase Bit: 0 00:26:58.449 Status Code: 0x2 00:26:58.449 Status Code Type: 0x0 00:26:58.449 Do Not Retry: 1 00:26:58.449 Error Location: 0x28 00:26:58.449 LBA: 0x0 00:26:58.449 Namespace: 0x0 00:26:58.449 Vendor Log Page: 0x0 00:26:58.449 ----------- 00:26:58.449 Entry: 1 00:26:58.449 Error Count: 0x2 00:26:58.449 Submission Queue Id: 0x0 00:26:58.449 Command Id: 0x5 00:26:58.449 Phase Bit: 0 00:26:58.449 Status Code: 0x2 00:26:58.449 Status Code Type: 0x0 00:26:58.449 Do Not Retry: 1 00:26:58.449 Error Location: 0x28 00:26:58.449 LBA: 0x0 00:26:58.449 Namespace: 0x0 00:26:58.449 Vendor Log Page: 0x0 00:26:58.449 ----------- 00:26:58.449 Entry: 2 00:26:58.449 Error Count: 0x1 00:26:58.449 Submission Queue Id: 0x0 00:26:58.449 Command Id: 0x4 00:26:58.449 Phase Bit: 0 00:26:58.449 Status Code: 0x2 00:26:58.449 Status Code Type: 0x0 00:26:58.449 Do Not Retry: 1 00:26:58.449 Error Location: 0x28 00:26:58.449 LBA: 0x0 00:26:58.449 Namespace: 0x0 00:26:58.449 Vendor Log Page: 0x0 00:26:58.449 00:26:58.449 Number of Queues 00:26:58.449 ================ 00:26:58.449 Number of I/O Submission Queues: 128 00:26:58.449 Number of I/O Completion Queues: 128 00:26:58.449 00:26:58.449 ZNS Specific Controller Data 00:26:58.449 ============================ 00:26:58.449 Zone Append Size Limit: 0 00:26:58.449 00:26:58.449 00:26:58.449 Active Namespaces 00:26:58.449 ================= 00:26:58.449 get_feature(0x05) failed 00:26:58.449 Namespace ID:1 00:26:58.449 Command Set Identifier: NVM (00h) 00:26:58.449 Deallocate: Supported 00:26:58.449 Deallocated/Unwritten Error: Not Supported 00:26:58.449 Deallocated Read Value: Unknown 00:26:58.449 Deallocate in Write Zeroes: Not Supported 00:26:58.449 Deallocated Guard Field: 0xFFFF 00:26:58.449 Flush: Supported 00:26:58.449 Reservation: Not Supported 00:26:58.449 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:58.449 Size (in LBAs): 1953525168 (931GiB) 00:26:58.449 Capacity (in LBAs): 1953525168 (931GiB) 00:26:58.449 Utilization (in LBAs): 1953525168 (931GiB) 00:26:58.449 UUID: 3c10d9bb-4c63-46a3-956d-105d04a810f2 00:26:58.449 Thin Provisioning: Not Supported 00:26:58.449 Per-NS Atomic Units: Yes 00:26:58.449 Atomic Boundary Size (Normal): 0 00:26:58.449 Atomic Boundary Size (PFail): 0 00:26:58.449 Atomic Boundary Offset: 0 00:26:58.449 NGUID/EUI64 Never Reused: No 00:26:58.449 ANA group ID: 1 00:26:58.449 Namespace Write Protected: No 00:26:58.449 Number of LBA Formats: 1 00:26:58.449 Current LBA Format: LBA Format #00 00:26:58.449 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:58.449 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:58.449 rmmod nvme_tcp 00:26:58.449 rmmod nvme_fabrics 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.449 13:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:00.983 13:11:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:00.983 13:11:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:02.884 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:03.142 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:27:04.079 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:04.079 00:27:04.079 real 0m15.303s 00:27:04.079 user 0m3.636s 00:27:04.079 sys 0m7.988s 00:27:04.079 13:11:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:04.079 13:11:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:04.079 ************************************ 00:27:04.079 END TEST nvmf_identify_kernel_target 00:27:04.079 ************************************ 00:27:04.079 13:11:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:04.079 13:11:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:04.079 13:11:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:04.079 13:11:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.079 ************************************ 00:27:04.079 START TEST nvmf_auth_host 00:27:04.079 ************************************ 00:27:04.079 13:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:04.338 * Looking for test storage... 
00:27:04.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.338 13:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:04.338 13:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:04.338 13:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:04.338 13:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:04.338 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.338 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:04.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.339 --rc genhtml_branch_coverage=1 00:27:04.339 --rc genhtml_function_coverage=1 00:27:04.339 --rc genhtml_legend=1 00:27:04.339 --rc geninfo_all_blocks=1 00:27:04.339 --rc geninfo_unexecuted_blocks=1 00:27:04.339 00:27:04.339 ' 00:27:04.339 13:11:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:04.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.339 --rc genhtml_branch_coverage=1 00:27:04.339 --rc genhtml_function_coverage=1 00:27:04.339 --rc genhtml_legend=1 00:27:04.339 --rc geninfo_all_blocks=1 00:27:04.339 --rc geninfo_unexecuted_blocks=1 00:27:04.339 00:27:04.339 ' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:04.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.339 --rc genhtml_branch_coverage=1 00:27:04.339 --rc genhtml_function_coverage=1 00:27:04.339 --rc genhtml_legend=1 00:27:04.339 --rc geninfo_all_blocks=1 00:27:04.339 --rc geninfo_unexecuted_blocks=1 00:27:04.339 00:27:04.339 ' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:04.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.339 --rc genhtml_branch_coverage=1 00:27:04.339 --rc genhtml_function_coverage=1 00:27:04.339 --rc genhtml_legend=1 00:27:04.339 --rc geninfo_all_blocks=1 00:27:04.339 --rc geninfo_unexecuted_blocks=1 00:27:04.339 00:27:04.339 ' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
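The cmp_versions trace above (scripts/common.sh) splits the dotted versions on `IFS=.-` and walks the fields to decide whether the installed lcov predates 2.x. Outside the harness, the same check can be sketched with `sort -V` (a GNU coreutils extension; the function name `ver_lt` is hypothetical, and the harness's own implementation loops over numeric fields instead):

```shell
# Succeeds (exit 0) when dotted version $1 sorts strictly before $2.
# Mirrors the effect of "cmp_versions 1.15 '<' 2" seen in the trace.
ver_lt() {
  [ "$1" = "$2" ] && return 1   # equal is not less-than
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

Here `ver_lt 1.15 2` succeeds, which is why the trace goes on to enable the lcov 1.x branch/function-coverage flags collected into LCOV_OPTS.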
00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.339 13:11:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:04.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.339 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:04.340 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:04.340 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:04.340 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.340 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.340 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.340 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:04.340 13:11:04 
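The "[: : integer expression expected" message in the trace above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'` while a variable is empty: single-bracket `-eq` requires both operands to be integers, so the test prints an error and returns nonzero (the script simply continues, since a failed `[` does not abort). A minimal reproduction and a defensive rewrite (variable names hypothetical):

```shell
shm_id=""                          # empty, as in the failing trace
# [ "$shm_id" -eq 1 ] would print "[: : integer expression expected".
# Defaulting the expansion first keeps the test well-formed:
if [ "${shm_id:--1}" -eq 1 ]; then
  interactive=yes
else
  interactive=no
fi
```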
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:04.340 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:04.340 13:11:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:10.912 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:10.912 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:10.912 Found net devices under 0000:86:00.0: cvl_0_0 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:10.912 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:10.913 Found net devices under 0000:86:00.1: cvl_0_1 00:27:10.913 13:11:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:10.913 13:11:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:10.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:27:10.913 00:27:10.913 --- 10.0.0.2 ping statistics --- 00:27:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.913 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:27:10.913 00:27:10.913 --- 10.0.0.1 ping statistics --- 00:27:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.913 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2123975 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:10.913 13:11:09 
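The firewall step above goes through an `ipts` helper that tags every rule it adds with an identifying comment (visible in the expanded `iptables ... -m comment --comment 'SPDK_NVMF:...'` line), so teardown can later find and delete exactly the SPDK-inserted rules. A sketch of that pattern, where `echo` stands in for `iptables` so the example runs unprivileged; the helper body is an assumption, not the harness source:

```shell
ipts() {
  # Run the command with a searchable tag appended, mirroring the
  # "-m comment --comment SPDK_NVMF:<args>" expansion seen in the trace.
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
```

Cleanup can then do the reverse: list rules, grep for the `SPDK_NVMF:` tag, and replay each match as a delete.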
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2123975 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2123975 ']' 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.913 13:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c4d168ed148c69d308412e4576800a7a 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aXn 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c4d168ed148c69d308412e4576800a7a 0 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c4d168ed148c69d308412e4576800a7a 0 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c4d168ed148c69d308412e4576800a7a 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aXn 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aXn 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.aXn 00:27:10.913 13:11:10 
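Each gen_dhchap_key call above draws random bytes with `xxd`, then pipes the resulting hex string through an inline `python -` snippet to wrap it in the DHHC-1 secret representation used for NVMe in-band authentication. Below is a sketch of that wrapping, assuming the TP 8006-style layout `DHHC-1:<hash id>:<base64 of key bytes plus little-endian CRC32>:`; the exact python the harness runs is not shown in the trace, so treat the details as an approximation:

```shell
key=c4d168ed148c69d308412e4576800a7a   # the 32-char hex string from the trace
secret=$(python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the hex string is used as ASCII text
crc = zlib.crc32(key).to_bytes(4, "little")  # integrity footer over the key bytes
# "00" selects the null hash, matching "format_dhchap_key ... 0" in the trace
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
)
```

The formatted secret is then written to a mode-0600 temp file (/tmp/spdk.key-null.aXn here) and recorded as keys[0] for the auth test.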
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7da21c6ee83dffaa0e527d3c654c0db5cf4ce89bbbc2d83a7fbf451f4f224a4b 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.r2Z 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7da21c6ee83dffaa0e527d3c654c0db5cf4ce89bbbc2d83a7fbf451f4f224a4b 3 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7da21c6ee83dffaa0e527d3c654c0db5cf4ce89bbbc2d83a7fbf451f4f224a4b 3 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7da21c6ee83dffaa0e527d3c654c0db5cf4ce89bbbc2d83a7fbf451f4f224a4b 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.r2Z 00:27:10.913 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.r2Z 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.r2Z 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a2f6383a78a9c09617b0ff370b4e94e16042c5535c82c4df 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.biV 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a2f6383a78a9c09617b0ff370b4e94e16042c5535c82c4df 0 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a2f6383a78a9c09617b0ff370b4e94e16042c5535c82c4df 0 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:10.914 13:11:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a2f6383a78a9c09617b0ff370b4e94e16042c5535c82c4df 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.biV 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.biV 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.biV 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b399608a47826a03f17441ec44d275f0d3d2ea25658112f 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.H53 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b399608a47826a03f17441ec44d275f0d3d2ea25658112f 2 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 7b399608a47826a03f17441ec44d275f0d3d2ea25658112f 2 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b399608a47826a03f17441ec44d275f0d3d2ea25658112f 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.H53 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.H53 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.H53 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c97a0da3bc364ec8595c1ef731416e90 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.i0b 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c97a0da3bc364ec8595c1ef731416e90 1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c97a0da3bc364ec8595c1ef731416e90 1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c97a0da3bc364ec8595c1ef731416e90 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.i0b 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.i0b 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.i0b 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=157de3bcce137e782f725657c173c5fc 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hni 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 157de3bcce137e782f725657c173c5fc 1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 157de3bcce137e782f725657c173c5fc 1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=157de3bcce137e782f725657c173c5fc 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hni 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hni 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hni 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:10.914 13:11:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c70eb075f79d6eb071189e2683353aff135c88dca2ff4f36 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.aht 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c70eb075f79d6eb071189e2683353aff135c88dca2ff4f36 2 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c70eb075f79d6eb071189e2683353aff135c88dca2ff4f36 2 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c70eb075f79d6eb071189e2683353aff135c88dca2ff4f36 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.aht 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.aht 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.aht 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e4aee839c5982eb861ec06bed50065c7 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Fqf 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e4aee839c5982eb861ec06bed50065c7 0 00:27:10.914 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e4aee839c5982eb861ec06bed50065c7 0 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e4aee839c5982eb861ec06bed50065c7 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Fqf 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Fqf 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Fqf 00:27:10.915 13:11:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6aff2397d0fbec7f3c59e6f6be8a48179dbae1e8b3c3328fecf4338f4efc0b37 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ww5 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6aff2397d0fbec7f3c59e6f6be8a48179dbae1e8b3c3328fecf4338f4efc0b37 3 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6aff2397d0fbec7f3c59e6f6be8a48179dbae1e8b3c3328fecf4338f4efc0b37 3 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6aff2397d0fbec7f3c59e6f6be8a48179dbae1e8b3c3328fecf4338f4efc0b37 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:10.915 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ww5 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ww5 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Ww5 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2123975 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2123975 ']' 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
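The `gen_dhchap_key` calls traced above all follow the same recipe: read a hex string from `/dev/urandom` via `xxd`, then hand it to a small inline Python formatter (`format_key`) that emits the `DHHC-1:<digest>:<base64>:` secret layout used by NVMe DH-HMAC-CHAP. A minimal reconstruction of that formatting step is sketched below; it is inferred from the trace output (the hex string itself is treated as the secret, with a little-endian CRC-32 trailer appended before base64 encoding), not copied verbatim from SPDK's `nvmf/common.sh`.

```python
import base64
import zlib


def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Format a hex key string as an NVMe DH-HMAC-CHAP secret.

    Reconstruction of the inline ``python -`` formatter in the trace:
    the ASCII hex string is the secret payload, a 4-byte little-endian
    CRC-32 of it is appended, and the whole blob is base64-encoded.
    """
    raw = key.encode("ascii")                              # hex string itself is the secret
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")  # CRC-32 integrity trailer
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"{prefix}:{digest:02x}:{b64}:"


# The 48-char, digest-0 ("null") key generated for keys[1] in the trace above;
# assuming the CRC/base64 layout is as reconstructed, this reproduces the
# DHHC-1:00:YTJm... secret that later appears in the nvmet_auth_set_key step.
print(format_dhchap_key("a2f6383a78a9c09617b0ff370b4e94e16042c5535c82c4df", 0))
```

The digest indicator (`00` for null, `01`/`02`/`03` for SHA-256/384/512) matches the `digests` associative array declared at the top of each `gen_dhchap_key` invocation in the trace.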
00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aXn 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.r2Z ]] 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.r2Z 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.biV 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.H53 ]] 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.H53 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.i0b 00:27:11.174 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hni ]] 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hni 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.aht 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Fqf ]] 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Fqf 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.175 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.434 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.434 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.434 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Ww5 00:27:11.434 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.434 13:11:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.434 13:11:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:11.434 13:11:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:13.967 Waiting for block devices as requested 00:27:13.967 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:27:14.227 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:14.227 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:14.227 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:14.227 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:14.486 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:14.486 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:14.486 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:14.486 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:14.745 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:14.745 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:14.745 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:14.745 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:15.004 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:15.004 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:15.004 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:15.263 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:15.831 No valid GPT data, bailing 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:15.831 00:27:15.831 Discovery Log Number of Records 2, Generation counter 2 00:27:15.831 =====Discovery Log Entry 0====== 00:27:15.831 trtype: tcp 00:27:15.831 adrfam: ipv4 00:27:15.831 subtype: current discovery subsystem 00:27:15.831 treq: not specified, sq flow control disable supported 00:27:15.831 portid: 1 00:27:15.831 trsvcid: 4420 00:27:15.831 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:15.831 traddr: 10.0.0.1 00:27:15.831 eflags: none 00:27:15.831 sectype: none 00:27:15.831 =====Discovery Log Entry 1====== 00:27:15.831 trtype: tcp 00:27:15.831 adrfam: ipv4 00:27:15.831 subtype: nvme subsystem 00:27:15.831 treq: not specified, sq flow control disable supported 00:27:15.831 portid: 1 00:27:15.831 trsvcid: 4420 00:27:15.831 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:15.831 traddr: 10.0.0.1 00:27:15.831 eflags: none 00:27:15.831 sectype: none 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]]
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.831 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.091 nvme0n1
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA:
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=:
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA:
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]]
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=:
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.091 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.351 nvme0n1
00:27:16.351 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.351 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:16.351 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:16.351 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.351 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.351 13:11:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]]
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.351 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.610 nvme0n1
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]]
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:16.610 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.611 nvme0n1
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.611 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==:
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY:
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==:
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY:
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.871 nvme0n1
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:16.871 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=:
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=:
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.132 nvme0n1
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA:
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=:
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA:
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]]
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=:
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:17.132 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.133 13:11:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.392 nvme0n1
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]]
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:17.392 13:11:17
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.392 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.651 nvme0n1 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.651 13:11:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.651 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.652 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.910 nvme0n1 00:27:17.910 13:11:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:17.910 13:11:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.910 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 nvme0n1 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.168 13:11:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.168 13:11:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.426 nvme0n1 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:18.426 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.427 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.685 nvme0n1 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.685 
13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.685 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.943 nvme0n1 00:27:18.943 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.201 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.202 13:11:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.202 13:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.459 nvme0n1 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.459 13:11:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:19.459 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:19.459 
13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.460 13:11:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.460 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.718 nvme0n1 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.718 13:11:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.718 
13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.718 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.976 nvme0n1 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.976 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.234 13:11:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.234 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.235 13:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.492 nvme0n1 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.492 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.493 13:11:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.493 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.056 nvme0n1 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:21.056 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.057 13:11:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.057 13:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.314 nvme0n1 00:27:21.314 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.314 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.314 13:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.314 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.314 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.314 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.571 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.571 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.571 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.571 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.571 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.571 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.571 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:21.571 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.572 13:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.572 13:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.572 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.829 nvme0n1 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.829 13:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.829 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.830 13:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.830 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.830 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.830 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.830 13:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.395 nvme0n1 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.395 13:11:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.395 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.961 nvme0n1 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.961 13:11:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.961 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.962 13:11:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.962 13:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.962 13:11:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.527 nvme0n1 00:27:23.527 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.527 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.527 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.527 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.527 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.785 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.786 13:11:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.786 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.352 nvme0n1 00:27:24.352 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.352 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.352 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.352 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.352 13:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.352 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.352 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.352 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.352 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.352 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.352 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.353 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.919 nvme0n1 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.919 
13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.919 13:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.853 nvme0n1 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 nvme0n1 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.854 
13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.854 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 nvme0n1 
00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:26.113 13:11:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.113 
13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.113 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.371 nvme0n1 00:27:26.371 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.371 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.371 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.371 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.371 13:11:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.371 13:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.371 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.372 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.629 nvme0n1 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:26.629 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.630 13:11:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.630 nvme0n1 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.630 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.887 nvme0n1 00:27:26.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.888 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.888 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.888 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.888 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.888 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.145 
13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.145 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.146 nvme0n1 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.146 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.403 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.403 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.403 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.403 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.403 13:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 
00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.403 13:11:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.403 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.404 nvme0n1 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.404 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.404 13:11:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.662 nvme0n1 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.662 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.920 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.920 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.921 nvme0n1 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.921 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.180 13:11:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.180 13:11:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.180 13:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.439 nvme0n1
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]]
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.439 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.698 nvme0n1
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]]
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.698 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.956 nvme0n1
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==:
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY:
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==:
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]]
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY:
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:28.956 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.213 13:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.213 nvme0n1
00:27:29.213 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.213 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.213 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.213 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.213 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.470 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.470 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.470 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.470 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.470 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=:
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=:
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.471 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.728 nvme0n1
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA:
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=:
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA:
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]]
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=:
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.728 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:29.985 nvme0n1
00:27:29.985 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:29.985 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:29.985 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:29.985 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:29.985 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]]
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.244 13:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.503 nvme0n1
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]]
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:30.503 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:30.762 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:30.763 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.022 nvme0n1
00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host --
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:31.022 13:11:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.022 13:11:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.022 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.023 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.023 13:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.591 nvme0n1 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.591 13:11:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:31.591 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.850 nvme0n1 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.850 13:11:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.850 13:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.418 nvme0n1 00:27:32.418 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:32.418 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.418 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.418 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.418 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.677 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.677 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.677 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.678 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.246 nvme0n1 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.246 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.247 13:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.816 nvme0n1 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.816 13:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 nvme0n1 00:27:34.384 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.384 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.384 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.384 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.384 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.384 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.642 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.642 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.642 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.642 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.643 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:35.213 nvme0n1 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:35.213 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.214 13:11:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.214 13:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.214 nvme0n1 00:27:35.214 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.214 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.214 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.214 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.214 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.573 nvme0n1 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.843 nvme0n1 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.843 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.844 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.163 nvme0n1 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:36.163 nvme0n1 00:27:36.163 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.164 13:11:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.164 13:11:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.164 13:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.441 nvme0n1 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:36.441 13:11:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.441 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.442 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 nvme0n1 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 
13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.700 13:11:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.700 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.959 nvme0n1 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.959 13:11:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.959 13:11:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.959 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.219 nvme0n1 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.219 13:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:37.219 13:11:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.219 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.478 nvme0n1 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.478 
13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.478 
13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.478 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.479 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.479 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.479 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.479 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.479 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.479 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.479 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.479 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.737 nvme0n1 00:27:37.737 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.737 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.737 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.737 13:11:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.737 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.737 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.737 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.737 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.737 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.737 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.996 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.256 nvme0n1 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.256 13:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.515 nvme0n1 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:38.515 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.516 13:11:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.516 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.774 nvme0n1 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.774 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.775 13:11:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.775 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.033 nvme0n1 00:27:39.033 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.033 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.033 
13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.033 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.033 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.033 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.292 13:11:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.292 13:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.550 nvme0n1 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.551 13:11:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.551 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.119 nvme0n1 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:40.119 
13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.119 13:11:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.119 13:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.378 nvme0n1 00:27:40.378 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.378 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.378 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.378 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.378 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.638 13:11:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==: 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]] 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.638 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.897 nvme0n1 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.897 13:11:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=: 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.897 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.156 13:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.415 nvme0n1 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.415 
13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzRkMTY4ZWQxNDhjNjlkMzA4NDEyZTQ1NzY4MDBhN2GSUwrA: 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: ]] 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2RhMjFjNmVlODNkZmZhYTBlNTI3ZDNjNjU0YzBkYjVjZjRjZTg5YmJiYzJkODNhN2ZiZjQ1MWY0ZjIyNGE0YhPy1n4=: 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.415 13:11:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.415 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.983 nvme0n1 00:27:41.983 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.983 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.983 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.983 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.983 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.983 13:11:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.983 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.983 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.983 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.983 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==: 00:27:42.242 13:11:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.242 13:11:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.242 13:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.809 nvme0n1 00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.809 13:11:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]]
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:42.809 13:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.376 nvme0n1
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==:
00:27:43.376 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY:
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcwZWIwNzVmNzlkNmViMDcxMTg5ZTI2ODMzNTNhZmYxMzVjODhkY2EyZmY0ZjM2duZeCw==:
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY: ]]
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZWU4MzljNTk4MmViODYxZWMwNmJlZDUwMDY1Yzf7daXY:
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.377 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.943 nvme0n1
00:27:43.943 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.943 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:43.943 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:43.943 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=:
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmZjIzOTdkMGZiZWM3ZjNjNTllNmY2YmU4YTQ4MTc5ZGJhZTFlOGIzYzMzMjhmZWNmNDMzOGY0ZWZjMGIzN6HyEPc=:
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:43.944 13:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.509 nvme0n1
00:27:44.509 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.509 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:44.509 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.509 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.509 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:44.509 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]]
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:44.767 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.768 request:
00:27:44.768 {
00:27:44.768 "name": "nvme0",
00:27:44.768 "trtype": "tcp",
00:27:44.768 "traddr": "10.0.0.1",
00:27:44.768 "adrfam": "ipv4",
00:27:44.768 "trsvcid": "4420",
00:27:44.768 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:27:44.768 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:27:44.768 "prchk_reftag": false,
00:27:44.768 "prchk_guard": false,
00:27:44.768 "hdgst": false,
00:27:44.768 "ddgst": false,
00:27:44.768 "allow_unrecognized_csi": false,
00:27:44.768 "method": "bdev_nvme_attach_controller",
00:27:44.768 "req_id": 1
00:27:44.768 }
00:27:44.768 Got JSON-RPC error response
00:27:44.768 response:
00:27:44.768 {
00:27:44.768 "code": -5,
00:27:44.768 "message": "Input/output error"
00:27:44.768 }
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:44.768 request:
00:27:44.768 {
00:27:44.768 "name": "nvme0",
00:27:44.768 "trtype": "tcp",
00:27:44.768 "traddr": "10.0.0.1",
00:27:44.768 "adrfam": "ipv4",
00:27:44.768 "trsvcid": "4420",
00:27:44.768 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:27:44.768 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:27:44.768 "prchk_reftag": false,
00:27:44.768 "prchk_guard": false,
00:27:44.768 "hdgst": false,
00:27:44.768 "ddgst": false,
00:27:44.768 "dhchap_key": "key2",
00:27:44.768 "allow_unrecognized_csi": false,
00:27:44.768 "method": "bdev_nvme_attach_controller",
00:27:44.768 "req_id": 1
00:27:44.768 }
00:27:44.768 Got JSON-RPC error response
00:27:44.768 response:
00:27:44.768 {
00:27:44.768 "code": -5,
00:27:44.768 "message": "Input/output error"
00:27:44.768 }
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:44.768 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.027 request:
00:27:45.027 {
00:27:45.027 "name": "nvme0",
00:27:45.027 "trtype": "tcp",
00:27:45.027 "traddr": "10.0.0.1",
00:27:45.027 "adrfam": "ipv4",
00:27:45.027 "trsvcid": "4420",
00:27:45.027 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:27:45.027 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:27:45.027 "prchk_reftag": false,
00:27:45.027 "prchk_guard": false,
00:27:45.027 "hdgst": false,
00:27:45.027 "ddgst": false,
00:27:45.027 "dhchap_key": "key1",
00:27:45.027 "dhchap_ctrlr_key": "ckey2",
00:27:45.027 "allow_unrecognized_csi": false,
00:27:45.027 "method": "bdev_nvme_attach_controller",
00:27:45.027 "req_id": 1
00:27:45.027 }
00:27:45.027 Got JSON-RPC error response
00:27:45.027 response:
00:27:45.027 {
00:27:45.027 "code": -5,
00:27:45.027 "message": "Input/output error"
00:27:45.027 }
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.027 nvme0n1
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka:
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]]
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX:
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.027 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.285 request:
00:27:45.285 {
00:27:45.285 "name": "nvme0",
00:27:45.285 "dhchap_key": "key1",
00:27:45.285 "dhchap_ctrlr_key": "ckey2",
00:27:45.285 "method": "bdev_nvme_set_keys",
00:27:45.285 "req_id": 1
00:27:45.285 }
00:27:45.285 Got JSON-RPC error response
response:
00:27:45.285 {
00:27:45.285 "code": -13,
00:27:45.285 "message": "Permission denied"
00:27:45.285 }
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.285 13:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:45.285 13:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:45.285 13:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:27:45.285 13:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:27:46.654 13:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:27:46.654 13:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:27:46.654 13:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.654 13:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.654 13:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.654 13:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:27:46.654 13:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==:
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTJmNjM4M2E3OGE5YzA5NjE3YjBmZjM3MGI0ZTk0ZTE2MDQyYzU1MzVjODJjNGRmjf/8LA==:
00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z
DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: ]] 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:N2IzOTk2MDhhNDc4MjZhMDNmMTc0NDFlYzQ0ZDI3NWYwZDNkMmVhMjU2NTgxMTJmPi/aZg==: 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.588 nvme0n1 00:27:47.588 13:11:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk3YTBkYTNiYzM2NGVjODU5NWMxZWY3MzE0MTZlOTBxG4Ka: 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: ]] 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTU3ZGUzYmNjZTEzN2U3ODJmNzI1NjU3YzE3M2M1ZmNj8JNX: 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:47.588 13:11:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.588 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.588 request: 00:27:47.588 { 00:27:47.588 "name": "nvme0", 00:27:47.588 "dhchap_key": "key2", 00:27:47.588 "dhchap_ctrlr_key": "ckey1", 00:27:47.588 "method": "bdev_nvme_set_keys", 00:27:47.588 "req_id": 1 00:27:47.589 } 00:27:47.589 Got JSON-RPC error response 00:27:47.589 response: 00:27:47.589 { 00:27:47.589 "code": -13, 00:27:47.589 "message": "Permission denied" 00:27:47.589 } 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.589 13:11:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:47.589 13:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:48.962 rmmod nvme_tcp 00:27:48.962 rmmod 
nvme_fabrics 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2123975 ']' 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2123975 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2123975 ']' 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2123975 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2123975 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2123975' 00:27:48.962 killing process with pid 2123975 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2123975 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2123975 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.962 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.963 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.963 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.963 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.963 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.963 13:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:51.492 13:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:53.388 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:53.388 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:53.388 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:53.388 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:53.646 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:54.581 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:54.581 13:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.aXn /tmp/spdk.key-null.biV /tmp/spdk.key-sha256.i0b /tmp/spdk.key-sha384.aht /tmp/spdk.key-sha512.Ww5 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:54.581 13:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:57.110 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:57.110 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:57.110 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:57.110 00:27:57.110 real 0m53.076s 00:27:57.110 user 0m47.925s 00:27:57.110 sys 0m12.131s 00:27:57.110 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:57.110 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.110 ************************************ 00:27:57.110 END TEST nvmf_auth_host 00:27:57.110 ************************************ 00:27:57.369 13:11:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:27:57.369 13:11:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:57.369 13:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:57.369 13:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:57.369 13:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.369 ************************************ 00:27:57.369 START TEST nvmf_digest 00:27:57.369 ************************************ 00:27:57.369 13:11:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:57.369 * Looking for test storage... 00:27:57.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.369 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:57.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.370 --rc genhtml_branch_coverage=1 00:27:57.370 --rc genhtml_function_coverage=1 00:27:57.370 --rc genhtml_legend=1 00:27:57.370 --rc geninfo_all_blocks=1 00:27:57.370 --rc geninfo_unexecuted_blocks=1 00:27:57.370 00:27:57.370 ' 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:57.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.370 --rc genhtml_branch_coverage=1 00:27:57.370 --rc genhtml_function_coverage=1 00:27:57.370 --rc genhtml_legend=1 00:27:57.370 --rc geninfo_all_blocks=1 00:27:57.370 --rc geninfo_unexecuted_blocks=1 00:27:57.370 00:27:57.370 ' 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:57.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.370 --rc genhtml_branch_coverage=1 00:27:57.370 --rc genhtml_function_coverage=1 00:27:57.370 --rc genhtml_legend=1 00:27:57.370 --rc geninfo_all_blocks=1 00:27:57.370 --rc geninfo_unexecuted_blocks=1 00:27:57.370 00:27:57.370 ' 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:57.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.370 --rc genhtml_branch_coverage=1 00:27:57.370 --rc genhtml_function_coverage=1 00:27:57.370 --rc genhtml_legend=1 00:27:57.370 --rc geninfo_all_blocks=1 00:27:57.370 --rc geninfo_unexecuted_blocks=1 00:27:57.370 00:27:57.370 ' 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:57.370 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:57.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:57.629 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:57.630 13:11:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:57.630 13:11:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.897 13:12:02 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:02.897 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:02.897 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:02.897 Found net devices under 0000:86:00.0: cvl_0_0 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:02.897 Found net devices under 0000:86:00.1: cvl_0_1 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.897 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:02.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:28:02.898 00:28:02.898 --- 10.0.0.2 ping statistics --- 00:28:02.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.898 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:02.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:28:02.898 00:28:02.898 --- 10.0.0.1 ping statistics --- 00:28:02.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.898 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.898 ************************************ 00:28:02.898 START TEST nvmf_digest_clean 00:28:02.898 ************************************ 00:28:02.898 
13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2137735 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2137735 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2137735 ']' 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.898 13:12:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.898 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.898 [2024-11-29 13:12:02.613585] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:28:02.898 [2024-11-29 13:12:02.613627] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.898 [2024-11-29 13:12:02.680718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.158 [2024-11-29 13:12:02.723348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.158 [2024-11-29 13:12:02.723374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.158 [2024-11-29 13:12:02.723382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.158 [2024-11-29 13:12:02.723389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.158 [2024-11-29 13:12:02.723395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:03.158 [2024-11-29 13:12:02.723964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.158 null0 00:28:03.158 [2024-11-29 13:12:02.898113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.158 [2024-11-29 13:12:02.922304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2137889 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2137889 /var/tmp/bperf.sock 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2137889 ']' 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:03.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.158 13:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.158 [2024-11-29 13:12:02.958926] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:28:03.158 [2024-11-29 13:12:02.958971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2137889 ] 00:28:03.418 [2024-11-29 13:12:03.022402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.418 [2024-11-29 13:12:03.066857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.418 13:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.418 13:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:03.418 13:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:03.418 13:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:03.418 13:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:03.677 13:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.677 13:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.245 nvme0n1 00:28:04.245 13:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:04.245 13:12:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:04.245 Running I/O for 2 seconds... 00:28:06.117 23655.00 IOPS, 92.40 MiB/s [2024-11-29T12:12:05.937Z] 24172.00 IOPS, 94.42 MiB/s 00:28:06.117 Latency(us) 00:28:06.117 [2024-11-29T12:12:05.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.117 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:06.117 nvme0n1 : 2.04 23710.16 92.62 0.00 0.00 5287.57 2521.71 47869.77 00:28:06.117 [2024-11-29T12:12:05.938Z] =================================================================================================================== 00:28:06.118 [2024-11-29T12:12:05.938Z] Total : 23710.16 92.62 0.00 0.00 5287.57 2521.71 47869.77 00:28:06.118 { 00:28:06.118 "results": [ 00:28:06.118 { 00:28:06.118 "job": "nvme0n1", 00:28:06.118 "core_mask": "0x2", 00:28:06.118 "workload": "randread", 00:28:06.118 "status": "finished", 00:28:06.118 "queue_depth": 128, 00:28:06.118 "io_size": 4096, 00:28:06.118 "runtime": 2.044356, 00:28:06.118 "iops": 23710.156156755478, 00:28:06.118 "mibps": 92.61779748732609, 00:28:06.118 "io_failed": 0, 00:28:06.118 "io_timeout": 0, 00:28:06.118 "avg_latency_us": 5287.566188135509, 00:28:06.118 "min_latency_us": 2521.711304347826, 00:28:06.118 "max_latency_us": 47869.77391304348 00:28:06.118 } 00:28:06.118 ], 00:28:06.118 "core_count": 1 00:28:06.118 } 00:28:06.376 13:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:06.376 13:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:28:06.376 13:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:06.376 13:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:06.376 | select(.opcode=="crc32c") 00:28:06.376 | "\(.module_name) \(.executed)"' 00:28:06.376 13:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:06.376 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:06.376 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:06.376 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:06.376 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:06.376 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2137889 00:28:06.376 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2137889 ']' 00:28:06.376 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2137889 00:28:06.376 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:06.377 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.377 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2137889 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2137889' 00:28:06.635 killing process with pid 2137889 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2137889 00:28:06.635 Received shutdown signal, test time was about 2.000000 seconds 00:28:06.635 00:28:06.635 Latency(us) 00:28:06.635 [2024-11-29T12:12:06.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.635 [2024-11-29T12:12:06.455Z] =================================================================================================================== 00:28:06.635 [2024-11-29T12:12:06.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2137889 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 
131072 -t 2 -q 16 -z --wait-for-rpc 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2138448 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2138448 /var/tmp/bperf.sock 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2138448 ']' 00:28:06.635 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.636 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.636 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:06.636 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.636 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.636 [2024-11-29 13:12:06.386772] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:28:06.636 [2024-11-29 13:12:06.386820] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138448 ] 00:28:06.636 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:06.636 Zero copy mechanism will not be used. 
00:28:06.636 [2024-11-29 13:12:06.444900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.894 [2024-11-29 13:12:06.489638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.894 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.894 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:06.894 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:06.894 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:06.894 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:07.153 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.153 13:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.412 nvme0n1 00:28:07.412 13:12:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:07.412 13:12:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.412 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:07.412 Zero copy mechanism will not be used. 00:28:07.412 Running I/O for 2 seconds... 
00:28:09.724 4823.00 IOPS, 602.88 MiB/s [2024-11-29T12:12:09.544Z] 4878.50 IOPS, 609.81 MiB/s 00:28:09.724 Latency(us) 00:28:09.724 [2024-11-29T12:12:09.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.724 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:09.724 nvme0n1 : 2.00 4878.60 609.83 0.00 0.00 3276.91 698.10 5527.82 00:28:09.724 [2024-11-29T12:12:09.544Z] =================================================================================================================== 00:28:09.724 [2024-11-29T12:12:09.544Z] Total : 4878.60 609.83 0.00 0.00 3276.91 698.10 5527.82 00:28:09.724 { 00:28:09.724 "results": [ 00:28:09.724 { 00:28:09.724 "job": "nvme0n1", 00:28:09.724 "core_mask": "0x2", 00:28:09.724 "workload": "randread", 00:28:09.724 "status": "finished", 00:28:09.724 "queue_depth": 16, 00:28:09.724 "io_size": 131072, 00:28:09.724 "runtime": 2.003237, 00:28:09.724 "iops": 4878.603979459245, 00:28:09.724 "mibps": 609.8254974324057, 00:28:09.724 "io_failed": 0, 00:28:09.724 "io_timeout": 0, 00:28:09.724 "avg_latency_us": 3276.907511822724, 00:28:09.724 "min_latency_us": 698.1008695652174, 00:28:09.724 "max_latency_us": 5527.819130434783 00:28:09.724 } 00:28:09.724 ], 00:28:09.724 "core_count": 1 00:28:09.724 } 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:09.724 | select(.opcode=="crc32c") 00:28:09.724 | "\(.module_name) \(.executed)"' 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2138448 00:28:09.724 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2138448 ']' 00:28:09.725 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2138448 00:28:09.725 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:09.725 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.725 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2138448 00:28:09.725 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:09.725 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:09.725 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2138448' 00:28:09.725 killing process with pid 2138448 00:28:09.725 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2138448 00:28:09.725 Received shutdown signal, test time was about 2.000000 seconds 
00:28:09.725 00:28:09.725 Latency(us) 00:28:09.725 [2024-11-29T12:12:09.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.725 [2024-11-29T12:12:09.545Z] =================================================================================================================== 00:28:09.725 [2024-11-29T12:12:09.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.725 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2138448 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2139282 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2139282 /var/tmp/bperf.sock 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2139282 ']' 00:28:09.984 13:12:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.984 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.984 [2024-11-29 13:12:09.675245] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:28:09.984 [2024-11-29 13:12:09.675296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139282 ] 00:28:09.984 [2024-11-29 13:12:09.737089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.984 [2024-11-29 13:12:09.779834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.243 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.243 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:10.243 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:10.243 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:10.243 13:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:10.500 13:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.500 13:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.758 nvme0n1 00:28:10.758 13:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:10.758 13:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:10.758 Running I/O for 2 seconds... 
00:28:13.059 26406.00 IOPS, 103.15 MiB/s [2024-11-29T12:12:12.879Z] 26491.00 IOPS, 103.48 MiB/s 00:28:13.059 Latency(us) 00:28:13.059 [2024-11-29T12:12:12.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.059 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:13.059 nvme0n1 : 2.01 26494.38 103.49 0.00 0.00 4822.92 3476.26 8092.27 00:28:13.059 [2024-11-29T12:12:12.879Z] =================================================================================================================== 00:28:13.059 [2024-11-29T12:12:12.879Z] Total : 26494.38 103.49 0.00 0.00 4822.92 3476.26 8092.27 00:28:13.059 { 00:28:13.059 "results": [ 00:28:13.059 { 00:28:13.059 "job": "nvme0n1", 00:28:13.059 "core_mask": "0x2", 00:28:13.059 "workload": "randwrite", 00:28:13.059 "status": "finished", 00:28:13.059 "queue_depth": 128, 00:28:13.059 "io_size": 4096, 00:28:13.059 "runtime": 2.006086, 00:28:13.059 "iops": 26494.377608936007, 00:28:13.059 "mibps": 103.49366253490628, 00:28:13.059 "io_failed": 0, 00:28:13.059 "io_timeout": 0, 00:28:13.059 "avg_latency_us": 4822.923146140946, 00:28:13.059 "min_latency_us": 3476.257391304348, 00:28:13.059 "max_latency_us": 8092.271304347826 00:28:13.059 } 00:28:13.059 ], 00:28:13.059 "core_count": 1 00:28:13.059 } 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:13.059 | select(.opcode=="crc32c") 00:28:13.059 | "\(.module_name) \(.executed)"' 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2139282 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2139282 ']' 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2139282 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2139282 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2139282' 00:28:13.059 killing process with pid 2139282 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2139282 00:28:13.059 Received shutdown signal, test time was about 2.000000 seconds 
00:28:13.059 00:28:13.059 Latency(us) 00:28:13.059 [2024-11-29T12:12:12.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.059 [2024-11-29T12:12:12.879Z] =================================================================================================================== 00:28:13.059 [2024-11-29T12:12:12.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:13.059 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2139282 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2139948 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2139948 /var/tmp/bperf.sock 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2139948 ']' 00:28:13.318 13:12:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.318 13:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:13.318 [2024-11-29 13:12:13.044519] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:28:13.318 [2024-11-29 13:12:13.044570] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139948 ] 00:28:13.318 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.318 Zero copy mechanism will not be used. 
00:28:13.318 [2024-11-29 13:12:13.107473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.577 [2024-11-29 13:12:13.151501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.577 13:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.577 13:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:13.577 13:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:13.577 13:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:13.577 13:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:13.836 13:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:13.836 13:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.094 nvme0n1 00:28:14.094 13:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:14.094 13:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.094 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.094 Zero copy mechanism will not be used. 00:28:14.094 Running I/O for 2 seconds... 
00:28:16.404 6204.00 IOPS, 775.50 MiB/s [2024-11-29T12:12:16.224Z] 6544.00 IOPS, 818.00 MiB/s 00:28:16.404 Latency(us) 00:28:16.404 [2024-11-29T12:12:16.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.404 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:16.404 nvme0n1 : 2.00 6542.39 817.80 0.00 0.00 2441.47 1716.76 11169.61 00:28:16.404 [2024-11-29T12:12:16.224Z] =================================================================================================================== 00:28:16.404 [2024-11-29T12:12:16.224Z] Total : 6542.39 817.80 0.00 0.00 2441.47 1716.76 11169.61 00:28:16.404 { 00:28:16.404 "results": [ 00:28:16.404 { 00:28:16.404 "job": "nvme0n1", 00:28:16.404 "core_mask": "0x2", 00:28:16.404 "workload": "randwrite", 00:28:16.404 "status": "finished", 00:28:16.404 "queue_depth": 16, 00:28:16.404 "io_size": 131072, 00:28:16.404 "runtime": 2.00355, 00:28:16.404 "iops": 6542.38726260887, 00:28:16.404 "mibps": 817.7984078261087, 00:28:16.404 "io_failed": 0, 00:28:16.404 "io_timeout": 0, 00:28:16.404 "avg_latency_us": 2441.4743537965533, 00:28:16.404 "min_latency_us": 1716.7582608695652, 00:28:16.404 "max_latency_us": 11169.613913043479 00:28:16.404 } 00:28:16.404 ], 00:28:16.404 "core_count": 1 00:28:16.404 } 00:28:16.404 13:12:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:16.404 13:12:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:16.404 13:12:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:16.404 13:12:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:16.404 | select(.opcode=="crc32c") 00:28:16.404 | "\(.module_name) \(.executed)"' 00:28:16.404 13:12:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2139948 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2139948 ']' 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2139948 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.404 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2139948 00:28:16.405 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:16.405 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:16.405 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2139948' 00:28:16.405 killing process with pid 2139948 00:28:16.405 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2139948 00:28:16.405 Received shutdown signal, test time was about 2.000000 seconds 
00:28:16.405 00:28:16.405 Latency(us) 00:28:16.405 [2024-11-29T12:12:16.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.405 [2024-11-29T12:12:16.225Z] =================================================================================================================== 00:28:16.405 [2024-11-29T12:12:16.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:16.405 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2139948 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2137735 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2137735 ']' 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2137735 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2137735 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2137735' 00:28:16.663 killing process with pid 2137735 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2137735 00:28:16.663 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2137735 00:28:16.922 00:28:16.922 
real 0m13.933s 00:28:16.922 user 0m26.651s 00:28:16.922 sys 0m4.503s 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.922 ************************************ 00:28:16.922 END TEST nvmf_digest_clean 00:28:16.922 ************************************ 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:16.922 ************************************ 00:28:16.922 START TEST nvmf_digest_error 00:28:16.922 ************************************ 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2140501 00:28:16.922 
13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2140501 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2140501 ']' 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.922 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.922 [2024-11-29 13:12:16.589539] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:28:16.922 [2024-11-29 13:12:16.589579] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.922 [2024-11-29 13:12:16.650731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.922 [2024-11-29 13:12:16.692688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.922 [2024-11-29 13:12:16.692721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:16.923 [2024-11-29 13:12:16.692728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.923 [2024-11-29 13:12:16.692734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.923 [2024-11-29 13:12:16.692740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.923 [2024-11-29 13:12:16.693337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.180 [2024-11-29 13:12:16.789872] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.180 13:12:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.180 null0 00:28:17.180 [2024-11-29 13:12:16.881510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.180 [2024-11-29 13:12:16.905705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2140523 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2140523 /var/tmp/bperf.sock 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2140523 ']' 
00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.180 13:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.180 [2024-11-29 13:12:16.941458] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:28:17.180 [2024-11-29 13:12:16.941497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140523 ] 00:28:17.438 [2024-11-29 13:12:17.005669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.438 [2024-11-29 13:12:17.049973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.438 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.438 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:17.438 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.438 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.695 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:17.695 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.695 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.695 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.695 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.695 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.953 nvme0n1 00:28:18.211 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:18.211 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.211 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.211 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.211 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:18.211 13:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.211 Running I/O for 2 seconds... 00:28:18.211 [2024-11-29 13:12:17.897927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.211 [2024-11-29 13:12:17.897966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.211 [2024-11-29 13:12:17.897977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.211 [2024-11-29 13:12:17.909003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.211 [2024-11-29 13:12:17.909029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.211 [2024-11-29 13:12:17.909038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.211 [2024-11-29 13:12:17.919392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.211 [2024-11-29 13:12:17.919414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.211 [2024-11-29 13:12:17.919423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.211 [2024-11-29 13:12:17.927791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.211 [2024-11-29 13:12:17.927813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16558 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.211 [2024-11-29 13:12:17.927820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.211 [2024-11-29 13:12:17.939779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.211 [2024-11-29 13:12:17.939800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.211 [2024-11-29 13:12:17.939808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.211 [2024-11-29 13:12:17.952796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.211 [2024-11-29 13:12:17.952818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.211 [2024-11-29 13:12:17.952827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.211 [2024-11-29 13:12:17.962477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.211 [2024-11-29 13:12:17.962502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.211 [2024-11-29 13:12:17.962511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.211 [2024-11-29 13:12:17.971148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.211 [2024-11-29 13:12:17.971168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.211 [2024-11-29 13:12:17.971176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.211 [2024-11-29 13:12:17.980824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.212 [2024-11-29 13:12:17.980844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.212 [2024-11-29 13:12:17.980853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.212 [2024-11-29 13:12:17.990205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.212 [2024-11-29 13:12:17.990225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.212 [2024-11-29 13:12:17.990234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.212 [2024-11-29 13:12:18.000477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.212 [2024-11-29 13:12:18.000498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.212 [2024-11-29 13:12:18.000506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.212 [2024-11-29 13:12:18.009583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222b6b0) 00:28:18.212 [2024-11-29 13:12:18.009603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.212 [2024-11-29 13:12:18.009611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.212 [2024-11-29 13:12:18.019189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.212 [2024-11-29 13:12:18.019209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.212 [2024-11-29 13:12:18.019217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.212 [2024-11-29 13:12:18.029578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.212 [2024-11-29 13:12:18.029600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.212 [2024-11-29 13:12:18.029608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.042413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.042436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.042445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.050408] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.050427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.050435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.062512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.062534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.062542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.074144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.074164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.074172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.082641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.082660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.082668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.093676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.093697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.093705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.102110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.102130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.102138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.113832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.113853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.113861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.126708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.126728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.126736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.139503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.139523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.139534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.151268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.151288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.151297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.161048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.161069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.161078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.172835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.172856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 
13:12:18.172865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.186166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.186188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.186196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.198858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.198880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.198888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.207142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.207162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.207170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.218921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.218942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16058 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.218959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.230441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.471 [2024-11-29 13:12:18.230462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.471 [2024-11-29 13:12:18.230470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.471 [2024-11-29 13:12:18.243151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.472 [2024-11-29 13:12:18.243177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.472 [2024-11-29 13:12:18.243185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.472 [2024-11-29 13:12:18.252010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.472 [2024-11-29 13:12:18.252031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.472 [2024-11-29 13:12:18.252039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.472 [2024-11-29 13:12:18.263878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.472 [2024-11-29 13:12:18.263898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.472 [2024-11-29 13:12:18.263906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.472 [2024-11-29 13:12:18.272281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.472 [2024-11-29 13:12:18.272301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.472 [2024-11-29 13:12:18.272309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.472 [2024-11-29 13:12:18.284482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.472 [2024-11-29 13:12:18.284502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.472 [2024-11-29 13:12:18.284509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.292964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.731 [2024-11-29 13:12:18.293002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.731 [2024-11-29 13:12:18.293011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.303831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222b6b0) 00:28:18.731 [2024-11-29 13:12:18.303853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.731 [2024-11-29 13:12:18.303861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.315390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.731 [2024-11-29 13:12:18.315412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.731 [2024-11-29 13:12:18.315420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.324723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.731 [2024-11-29 13:12:18.324744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.731 [2024-11-29 13:12:18.324758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.334554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.731 [2024-11-29 13:12:18.334573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.731 [2024-11-29 13:12:18.334582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.343559] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.731 [2024-11-29 13:12:18.343580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.731 [2024-11-29 13:12:18.343588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.353239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.731 [2024-11-29 13:12:18.353260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.731 [2024-11-29 13:12:18.353268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.362427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.731 [2024-11-29 13:12:18.362447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.731 [2024-11-29 13:12:18.362456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.373333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.731 [2024-11-29 13:12:18.373354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.731 [2024-11-29 13:12:18.373361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:18.731 [2024-11-29 13:12:18.381251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.381271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.381278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.392243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.392262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.392269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.401221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.401240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.401248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.411550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.411576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.411584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.421859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.421879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.421889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.431432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.431452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.431461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.441015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.441035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.441043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.451142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.451163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 
13:12:18.451171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.460880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.460902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.460911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.470500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.470520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.470528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.478806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.478826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.478834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.488712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.488733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13463 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.488741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.498584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.498604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.498613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.509291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.509312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.509320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.519305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.519326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.519334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.529111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.529132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.529140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.539274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.539295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.539302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.732 [2024-11-29 13:12:18.548111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.732 [2024-11-29 13:12:18.548133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.732 [2024-11-29 13:12:18.548142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.991 [2024-11-29 13:12:18.558653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.991 [2024-11-29 13:12:18.558675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.991 [2024-11-29 13:12:18.558683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.991 [2024-11-29 13:12:18.567247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222b6b0) 00:28:18.991 [2024-11-29 13:12:18.567278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.991 [2024-11-29 13:12:18.567286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.991 [2024-11-29 13:12:18.578733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.991 [2024-11-29 13:12:18.578754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.991 [2024-11-29 13:12:18.578766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.991 [2024-11-29 13:12:18.589855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.991 [2024-11-29 13:12:18.589875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.991 [2024-11-29 13:12:18.589883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.991 [2024-11-29 13:12:18.598568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.598589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.598597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.608437] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.608457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.608465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.618092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.618113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.618120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.628314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.628336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.628343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.637007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.637028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.637036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.647040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.647061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.647069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.656644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.656664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.656672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.665548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.665573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.665581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.676317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.676339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.676347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.687871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.687893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.687900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.696752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.696773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.696781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.708553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.708573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.708581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.717290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.717310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 
13:12:18.717319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.729428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.729449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.729457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.738132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.738153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.738162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.749845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.749866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.749874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.758548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.758568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24829 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.758576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.770674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.770695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.770703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.781959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.781980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.781988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.792599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.792619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.792628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.992 [2024-11-29 13:12:18.801471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:18.992 [2024-11-29 13:12:18.801492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.992 [2024-11-29 13:12:18.801501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.811672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.811696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.811705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.821770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.821793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.821801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.830074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.830095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.830103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.840616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.840640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.840649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.851892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.851913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.851921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.861135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.861155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.861180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.869870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.869890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.869898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 24728.00 IOPS, 96.59 MiB/s [2024-11-29T12:12:19.072Z] 
[2024-11-29 13:12:18.881482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.881501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.881509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.890714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.890734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.890742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.901737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.901756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.901764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.911544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.911564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.911572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.920923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.920943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.920958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.931802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.931824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.931832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.941197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.941218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.941226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.949816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.949836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.949843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.960110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.960130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.960138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.970428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.970448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.970455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.978995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.979015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.979023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.990320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.990340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15129 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.990348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:18.999838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:18.999857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:18.999865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:19.010426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:19.010445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:19.010457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:19.021454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:19.021475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:19.021482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:19.030280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:19.030300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:37 nsid:1 lba:3904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:19.030308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:19.041998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:19.042017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.252 [2024-11-29 13:12:19.042025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.252 [2024-11-29 13:12:19.050917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.252 [2024-11-29 13:12:19.050937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.253 [2024-11-29 13:12:19.050945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.253 [2024-11-29 13:12:19.062857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.253 [2024-11-29 13:12:19.062877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.253 [2024-11-29 13:12:19.062885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.071557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.071579] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.071588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.083966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.083988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.083997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.096180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.096201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.096209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.107527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.107550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.107559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.116416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.116436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.116443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.126967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.126987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.126995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.136380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.136399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.136408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.145259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.145279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.145288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.155356] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.155375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.155383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.165987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.166007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.512 [2024-11-29 13:12:19.166014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.512 [2024-11-29 13:12:19.175568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.512 [2024-11-29 13:12:19.175589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.175597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.185296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.185315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.185326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.197799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.197820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.197828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.206369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.206389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.206397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.215972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.215992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.216000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.226239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.226259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.226267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.235994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.236014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.236022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.245482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.245503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.245511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.254880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.254905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.254913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.264567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.264589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 
13:12:19.264597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.273997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.274022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.274031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.283508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.283528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.283535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.293059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.293078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.293086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.303867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.303887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4245 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.303895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.313052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.313072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.313080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.513 [2024-11-29 13:12:19.325021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.513 [2024-11-29 13:12:19.325041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.513 [2024-11-29 13:12:19.325049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.336839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 13:12:19.336861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.336869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.348034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 13:12:19.348056] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.348064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.360618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 13:12:19.360639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.360647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.369124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 13:12:19.369145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.369153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.380986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 13:12:19.381006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.381014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.394038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 
13:12:19.394058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.394066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.405494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 13:12:19.405513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.405521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.414691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 13:12:19.414712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.414721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.427357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 13:12:19.427378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.427386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.441131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x222b6b0) 00:28:19.772 [2024-11-29 13:12:19.441152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.772 [2024-11-29 13:12:19.441161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.772 [2024-11-29 13:12:19.449313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.449332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.449340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.461721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.461741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.461753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.474122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.474142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.474150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.484936] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.484963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.484971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.493338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.493358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.493366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.506027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.506047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.506055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.519024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.519044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.519052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.530501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.530520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.530528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.543442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.543461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.543469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.552938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.552964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.552972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.565293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.565316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.565324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.577292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.577310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.577318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.773 [2024-11-29 13:12:19.586162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:19.773 [2024-11-29 13:12:19.586182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.773 [2024-11-29 13:12:19.586190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.032 [2024-11-29 13:12:19.598281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.032 [2024-11-29 13:12:19.598303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.032 [2024-11-29 13:12:19.598311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.032 [2024-11-29 13:12:19.607959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.032 [2024-11-29 13:12:19.607979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.032 [2024-11-29 
13:12:19.607988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.032 [2024-11-29 13:12:19.616482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.032 [2024-11-29 13:12:19.616502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.032 [2024-11-29 13:12:19.616511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.032 [2024-11-29 13:12:19.626612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.032 [2024-11-29 13:12:19.626632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.626639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.636289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.636309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.636317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.645494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.645514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8813 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.645523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.655901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.655921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.655929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.668037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.668057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.668066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.676474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.676494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.676502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.689124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.689144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.689151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.699367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.699388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.699396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.711350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.711371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.711379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.720638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.720657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.720665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.733371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.733391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.733399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.746084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.746103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.746115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.757607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.757627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.757635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.770902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.770921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.770929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.780465] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.780486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.780494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.789585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.789605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.789613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.798792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.798813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.798821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.809835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.809856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.809864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.821190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.821209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.821218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.829978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.829999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.830007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.033 [2024-11-29 13:12:19.842251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.033 [2024-11-29 13:12:19.842273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.033 [2024-11-29 13:12:19.842282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.293 [2024-11-29 13:12:19.855465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0) 00:28:20.293 [2024-11-29 13:12:19.855488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.293 [2024-11-29 13:12:19.855497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.293 [2024-11-29 13:12:19.864991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0)
00:28:20.293 [2024-11-29 13:12:19.865013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.293 [2024-11-29 13:12:19.865021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.293 [2024-11-29 13:12:19.877007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222b6b0)
00:28:20.293 [2024-11-29 13:12:19.877030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.293 [2024-11-29 13:12:19.877039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:20.293 24391.00 IOPS, 95.28 MiB/s
00:28:20.293 Latency(us)
00:28:20.293 [2024-11-29T12:12:20.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.293 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:20.293 nvme0n1 : 2.00 24408.78 95.35 0.00 0.00 5239.07 2735.42 17780.20
00:28:20.293 [2024-11-29T12:12:20.113Z] ===================================================================================================================
00:28:20.293 [2024-11-29T12:12:20.113Z] Total : 24408.78 95.35 0.00 0.00 5239.07 2735.42 17780.20
00:28:20.293 {
00:28:20.293   "results": [
00:28:20.293     {
00:28:20.293       "job": "nvme0n1",
00:28:20.293       "core_mask": "0x2",
00:28:20.293       "workload": "randread",
00:28:20.293       "status": "finished",
00:28:20.293       "queue_depth": 128,
00:28:20.293       "io_size": 4096,
00:28:20.293       "runtime": 2.003787,
00:28:20.293       "iops": 24408.781971337274,
00:28:20.293       "mibps": 95.34680457553623,
00:28:20.293       "io_failed": 0,
00:28:20.293       "io_timeout": 0,
00:28:20.293       "avg_latency_us": 5239.070228085303,
00:28:20.293       "min_latency_us": 2735.4156521739133,
00:28:20.293       "max_latency_us": 17780.201739130436
00:28:20.293     }
00:28:20.293   ],
00:28:20.293   "core_count": 1
00:28:20.293 }
00:28:20.293 13:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:20.293 13:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:20.293 13:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:20.293 | .driver_specific
00:28:20.293 | .nvme_error
00:28:20.293 | .status_code
00:28:20.293 | .command_transient_transport_error'
00:28:20.293 13:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:20.552 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 191 > 0 ))
00:28:20.552 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2140523
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2140523 ']'
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2140523
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2140523
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140523'
00:28:20.553 killing process with pid 2140523
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2140523
00:28:20.553 Received shutdown signal, test time was about 2.000000 seconds
00:28:20.553
00:28:20.553 Latency(us)
00:28:20.553 [2024-11-29T12:12:20.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.553 [2024-11-29T12:12:20.373Z] ===================================================================================================================
00:28:20.553 [2024-11-29T12:12:20.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2140523
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:20.553 13:12:20
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2141216 00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2141216 /var/tmp/bperf.sock 00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2141216 ']' 00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.553 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.553 [2024-11-29 13:12:20.365472] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:28:20.553 [2024-11-29 13:12:20.365518] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141216 ] 00:28:20.553 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.553 Zero copy mechanism will not be used. 
00:28:20.812 [2024-11-29 13:12:20.424455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.812 [2024-11-29 13:12:20.468433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.812 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.812 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:20.812 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:20.812 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:21.071 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:21.071 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.071 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.071 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.071 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.071 13:12:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.330 nvme0n1 00:28:21.590 13:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:21.590 13:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.590 13:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.590 13:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.590 13:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:21.590 13:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.590 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.590 Zero copy mechanism will not be used. 00:28:21.590 Running I/O for 2 seconds... 00:28:21.590 [2024-11-29 13:12:21.264341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.590 [2024-11-29 13:12:21.264378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.590 [2024-11-29 13:12:21.264389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.590 [2024-11-29 13:12:21.270468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.590 [2024-11-29 13:12:21.270494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.590 [2024-11-29 13:12:21.270503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.590 
[2024-11-29 13:12:21.276618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.590 [2024-11-29 13:12:21.276642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.590 [2024-11-29 13:12:21.276650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.590 [2024-11-29 13:12:21.282557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.590 [2024-11-29 13:12:21.282579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.590 [2024-11-29 13:12:21.282592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.590 [2024-11-29 13:12:21.288445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.590 [2024-11-29 13:12:21.288467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.590 [2024-11-29 13:12:21.288476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.590 [2024-11-29 13:12:21.294320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.590 [2024-11-29 13:12:21.294343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.590 [2024-11-29 13:12:21.294351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.590 [2024-11-29 13:12:21.300154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.590 [2024-11-29 13:12:21.300177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.590 [2024-11-29 13:12:21.300185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.590 [2024-11-29 13:12:21.305961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.590 [2024-11-29 13:12:21.305983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.305991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.311742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.311764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.311773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.317395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.317417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.317425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.323175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.323199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.323207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.329042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.329064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.329072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.334856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.334882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.334891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.340719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.340740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.340749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.346737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.346759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.346767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.352747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.352770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.352778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.358507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.358529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.358537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.364353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.364374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.364382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.370131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.370154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.370162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.375996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.376018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.376027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.381769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.381791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.381802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.387612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.387634] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.387642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.393407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.393429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.393437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.399174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.399196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.399205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.591 [2024-11-29 13:12:21.405047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.591 [2024-11-29 13:12:21.405070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.591 [2024-11-29 13:12:21.405078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.410890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.410915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.410923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.416865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.416888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.416896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.422712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.422734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.422743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.428449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.428471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.428479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.434050] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.434076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.434084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.439609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.439631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.439640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.445132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.445154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.445162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.450625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.450647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.450655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.456527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.456550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.456559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.462488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.462510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.462517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.467992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.468015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.468023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.473741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.473764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.473772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.479440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.479461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.479469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.485081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.485103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.485111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.851 [2024-11-29 13:12:21.490955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.851 [2024-11-29 13:12:21.490978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.851 [2024-11-29 13:12:21.490986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.496811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.496833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.496842] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.502597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.502618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.502626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.508179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.508202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.508210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.513851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.513874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.513882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.519399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.519421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.519430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.524973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.524995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.525004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.530918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.530940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.530959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.536828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.536850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.536858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.542618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.542641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.542649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.548386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.548409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.548416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.554063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.554085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.554093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.559690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.559711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.559719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.565200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.565222] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.565231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.570967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.570989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.570997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.576729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.576751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.576759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.582599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.582624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.582632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.588326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.588348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.588356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.594208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.594229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.594237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.600150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.600172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.600179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.605888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.605909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.605917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.611732] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.611754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.611761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.617555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.617577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.617585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.623318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.623339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.623347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.629055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.629076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.629085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.634848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.634871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.634879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.640616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.640637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.640645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.852 [2024-11-29 13:12:21.646335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.852 [2024-11-29 13:12:21.646356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.852 [2024-11-29 13:12:21.646364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:21.853 [2024-11-29 13:12:21.651897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.853 [2024-11-29 13:12:21.651919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.853 [2024-11-29 13:12:21.651926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:21.853 [2024-11-29 13:12:21.657638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.853 [2024-11-29 13:12:21.657659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.853 [2024-11-29 13:12:21.657667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:21.853 [2024-11-29 13:12:21.663435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.853 [2024-11-29 13:12:21.663456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.853 [2024-11-29 13:12:21.663464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:21.853 [2024-11-29 13:12:21.669205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:21.853 [2024-11-29 13:12:21.669229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.853 [2024-11-29 13:12:21.669237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.112 [2024-11-29 13:12:21.674778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.112 [2024-11-29 13:12:21.674802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 
13:12:21.674811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.680854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.680878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.680890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.687960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.687982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.687991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.695483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.695507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.695517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.703073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.703095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.703103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.709296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.709317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.709325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.715360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.715381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.715388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.721375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.721396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.721404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.727365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.727387] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.727395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.734223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.734245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.734253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.742353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.742375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.742383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.746986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.747006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.747014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.752703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 
13:12:21.752725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.752734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.759458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.759480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.759488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.767492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.767515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.767523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.775578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.775599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.775607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.783045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.783067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.783076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.789658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.789679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.789687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.796327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.796349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.796361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.802779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.802800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.802807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.810190] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.810211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.810219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.818323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.818344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.818352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.825553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.825575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.825583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.831095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.831117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.831126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.837045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.837065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.837073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.843130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.843151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.843159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.849096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.113 [2024-11-29 13:12:21.849118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.113 [2024-11-29 13:12:21.849126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.113 [2024-11-29 13:12:21.855081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.855105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.855113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.861219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.861239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.861247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.867434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.867454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.867462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.873553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.873575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.873582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.879791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.879811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 
13:12:21.879819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.885850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.885871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.885878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.891824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.891844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.891852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.897730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.897751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.897758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.903827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.903848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.903856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.910245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.910267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.910275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.917002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.917023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.917030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.923155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.923175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.923183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.114 [2024-11-29 13:12:21.928715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.114 [2024-11-29 13:12:21.928738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.114 [2024-11-29 13:12:21.928750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.934583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.934605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.934614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.940928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.940956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.940965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.946849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.946872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.946880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.953307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.953329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.953338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.959498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.959520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.959532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.965402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.965424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.965432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.971290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.971312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.971320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.977292] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.977314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.977322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.983167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.983189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.983197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.988857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.988879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.988887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:21.994781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:21.994802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:21.994810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.000452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.000474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.000482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.004144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.004164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.004172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.008464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.008485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.008493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.013985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.014005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.014014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.019633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.019654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.019662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.025485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.025506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.025514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.031672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.031694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.031703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.037729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.037750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.037758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.043572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.043594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.043603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.049447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.049468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.049477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.375 [2024-11-29 13:12:22.055353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.375 [2024-11-29 13:12:22.055374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.375 [2024-11-29 13:12:22.055386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.061173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.061194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.061202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.066754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.066775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.066783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.072340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.072361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.072369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.078016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.078036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.078044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.084025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.084045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.084053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.090178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.090199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.090207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.095849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.095871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.095878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.102059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.102080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.102088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.107819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.107843] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.107851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.113939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.113968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.113977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.120324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.120344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.120352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.126455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.126476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.126484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.132417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.132437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.132445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.138394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.138414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.138422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.144459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.144480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.144487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.150670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.150691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.150699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.157705] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.157725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.157734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.163906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.163928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.163935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.170118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.170138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.170146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.176028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.176048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.176056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.181915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.181935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.181943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.376 [2024-11-29 13:12:22.187846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.376 [2024-11-29 13:12:22.187866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.376 [2024-11-29 13:12:22.187874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.193762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.193784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.193793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.199596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.199618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.199626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.205456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.205478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.205486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.211086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.211107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.211119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.217765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.217787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.217795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.224182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.224204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 
13:12:22.224212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.230295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.230316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.230325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.236191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.236213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.236221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.242061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.242081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.242089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.248312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.248333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.248342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.254559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.254580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.254587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.637 5122.00 IOPS, 640.25 MiB/s [2024-11-29T12:12:22.457Z] [2024-11-29 13:12:22.262148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.262170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.262178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.268272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.268293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.268300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.274525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 
13:12:22.274546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.274554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.280484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.280505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.280513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.286515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.286537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.286546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.292612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.292633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.292641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.298756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.298777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.298785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.304774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.304794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.304802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.310805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.310825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.310833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.637 [2024-11-29 13:12:22.316839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.637 [2024-11-29 13:12:22.316860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.637 [2024-11-29 13:12:22.316872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:22.638 [2024-11-29 13:12:22.323157] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.638 [2024-11-29 13:12:22.323179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-11-29 13:12:22.323187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:22.638 [2024-11-29 13:12:22.329448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.638 [2024-11-29 13:12:22.329470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-11-29 13:12:22.329478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:22.638 [2024-11-29 13:12:22.335512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.638 [2024-11-29 13:12:22.335532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-11-29 13:12:22.335540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:22.638 [2024-11-29 13:12:22.341287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:22.638 [2024-11-29 13:12:22.341308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.638 [2024-11-29 13:12:22.341316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:28:22.638 [... repeated log output condensed: the same three-message sequence (nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done data digest error on tqpair=(0x1dbf1a0), nvme_qpair.c:243:nvme_io_qpair_print_command READ sqid:1, nvme_qpair.c:474:spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22)) recurs for further in-flight READs with varying cid/lba/sqhd, from 13:12:22.347 through 13:12:22.776 ...] [2024-11-29 13:12:22.776817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.776839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:28:23.161 [2024-11-29 13:12:22.776848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.161 [2024-11-29 13:12:22.785317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.785340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.161 [2024-11-29 13:12:22.785348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.161 [2024-11-29 13:12:22.793850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.793875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.161 [2024-11-29 13:12:22.793885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.161 [2024-11-29 13:12:22.802143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.802165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.161 [2024-11-29 13:12:22.802173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.161 [2024-11-29 13:12:22.810371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.810399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.161 [2024-11-29 13:12:22.810407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.161 [2024-11-29 13:12:22.818976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.818999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.161 [2024-11-29 13:12:22.819007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.161 [2024-11-29 13:12:22.827366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.827388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.161 [2024-11-29 13:12:22.827396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.161 [2024-11-29 13:12:22.835240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.835262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.161 [2024-11-29 13:12:22.835270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.161 [2024-11-29 13:12:22.843144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.843166] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.161 [2024-11-29 13:12:22.843175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.161 [2024-11-29 13:12:22.851331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.161 [2024-11-29 13:12:22.851353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.851362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.858210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.858232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.858240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.864407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.864428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.864437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.868399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.868419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.868427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.874419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.874440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.874449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.881885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.881906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.881914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.890247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.890269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.890278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.898601] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.898624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.898632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.907421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.907449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.907457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.916134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.916155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.916164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.923913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.923934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.923943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.933828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.933849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.933858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.941314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.941337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.941350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.949648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.949670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.949679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.958694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.958716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.958725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.966667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.966689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.966697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.162 [2024-11-29 13:12:22.974701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.162 [2024-11-29 13:12:22.974724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.162 [2024-11-29 13:12:22.974733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:22.983144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:22.983166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:22.983175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:22.991294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:22.991317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:22.991326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:22.998517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:22.998538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:22.998546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.004320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.004341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:23.004350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.010631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.010657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:23.010665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.016356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.016378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:23.422 [2024-11-29 13:12:23.016386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.022052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.022073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:23.022081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.027866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.027888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:23.027898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.033684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.033704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:23.033712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.039343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.039364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:23.039371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.045020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.045041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:23.045049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.050650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.050671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.422 [2024-11-29 13:12:23.050678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.422 [2024-11-29 13:12:23.056235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.422 [2024-11-29 13:12:23.056256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.056264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.061866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.061887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.061895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.067466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.067487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.067495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.073221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.073242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.073250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.078962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.078984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.078991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.084736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 
00:28:23.423 [2024-11-29 13:12:23.084757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.084765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.090336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.090358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.090366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.095956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.095976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.095984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.101779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.101800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.101808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.107490] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.107512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.107526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.113136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.113156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.113165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.118989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.119010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.119018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.124822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.124843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.124851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.130548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.130569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.130578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.136231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.136253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.136262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.142036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.142057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.142065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.147825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.147845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.147853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.153503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.153523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.153531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.159335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.159356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.159364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.165166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.165188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.165195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.170734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.170756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.170763] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.176426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.176445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.176453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.181788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.181808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.181816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.187377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.187398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.187406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.192997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.193017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.193025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.198498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.198519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.198527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.203874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.203895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.423 [2024-11-29 13:12:23.203907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.423 [2024-11-29 13:12:23.209339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.423 [2024-11-29 13:12:23.209359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.424 [2024-11-29 13:12:23.209366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.424 [2024-11-29 13:12:23.214891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.424 [2024-11-29 13:12:23.214911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.424 [2024-11-29 13:12:23.214918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.424 [2024-11-29 13:12:23.220491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.424 [2024-11-29 13:12:23.220511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.424 [2024-11-29 13:12:23.220519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.424 [2024-11-29 13:12:23.226152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.424 [2024-11-29 13:12:23.226173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.424 [2024-11-29 13:12:23.226181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.424 [2024-11-29 13:12:23.231760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.424 [2024-11-29 13:12:23.231780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.424 [2024-11-29 13:12:23.231788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.424 [2024-11-29 13:12:23.237350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.424 [2024-11-29 13:12:23.237372] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.424 [2024-11-29 13:12:23.237381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.683 [2024-11-29 13:12:23.243296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.683 [2024-11-29 13:12:23.243319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.683 [2024-11-29 13:12:23.243328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:23.683 [2024-11-29 13:12:23.249120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.683 [2024-11-29 13:12:23.249143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.683 [2024-11-29 13:12:23.249150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.683 [2024-11-29 13:12:23.254778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.683 [2024-11-29 13:12:23.254803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.683 [2024-11-29 13:12:23.254812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:23.683 5000.00 IOPS, 625.00 MiB/s [2024-11-29T12:12:23.503Z] [2024-11-29 13:12:23.261498] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dbf1a0) 00:28:23.683 [2024-11-29 13:12:23.261519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.683 [2024-11-29 13:12:23.261527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.683 00:28:23.683 Latency(us) 00:28:23.683 [2024-11-29T12:12:23.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.683 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:23.683 nvme0n1 : 2.00 5002.46 625.31 0.00 0.00 3195.06 683.85 12879.25 00:28:23.683 [2024-11-29T12:12:23.503Z] =================================================================================================================== 00:28:23.683 [2024-11-29T12:12:23.503Z] Total : 5002.46 625.31 0.00 0.00 3195.06 683.85 12879.25 00:28:23.683 { 00:28:23.683 "results": [ 00:28:23.683 { 00:28:23.683 "job": "nvme0n1", 00:28:23.683 "core_mask": "0x2", 00:28:23.683 "workload": "randread", 00:28:23.683 "status": "finished", 00:28:23.683 "queue_depth": 16, 00:28:23.683 "io_size": 131072, 00:28:23.683 "runtime": 2.002213, 00:28:23.683 "iops": 5002.464772728976, 00:28:23.683 "mibps": 625.308096591122, 00:28:23.683 "io_failed": 0, 00:28:23.683 "io_timeout": 0, 00:28:23.683 "avg_latency_us": 3195.06366995416, 00:28:23.683 "min_latency_us": 683.8539130434783, 00:28:23.683 "max_latency_us": 12879.248695652173 00:28:23.683 } 00:28:23.683 ], 00:28:23.683 "core_count": 1 00:28:23.683 } 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:23.683 13:12:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:23.683 | .driver_specific 00:28:23.683 | .nvme_error 00:28:23.683 | .status_code 00:28:23.683 | .command_transient_transport_error' 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 324 > 0 )) 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2141216 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2141216 ']' 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2141216 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.683 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2141216 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2141216' 00:28:23.942 killing process with pid 2141216 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2141216 00:28:23.942 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.942 00:28:23.942 
Latency(us) 00:28:23.942 [2024-11-29T12:12:23.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.942 [2024-11-29T12:12:23.762Z] =================================================================================================================== 00:28:23.942 [2024-11-29T12:12:23.762Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2141216 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2141687 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2141687 /var/tmp/bperf.sock 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2141687 ']' 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:23.942 13:12:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:23.942 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.942 [2024-11-29 13:12:23.722916] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:28:23.942 [2024-11-29 13:12:23.722970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141687 ] 00:28:24.201 [2024-11-29 13:12:23.780279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.201 [2024-11-29 13:12:23.821985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.201 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.201 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:24.201 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:24.201 13:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:24.459 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:24.459 13:12:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.459 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.459 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.459 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.459 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.718 nvme0n1 00:28:24.718 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:24.718 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.718 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:24.718 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.718 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:24.718 13:12:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.977 Running I/O for 2 seconds... 
00:28:24.977 [2024-11-29 13:12:24.577917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.977 [2024-11-29 13:12:24.578092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.977 [2024-11-29 13:12:24.578120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.977 [2024-11-29 13:12:24.587819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.977 [2024-11-29 13:12:24.587962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.977 [2024-11-29 13:12:24.587986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.977 [2024-11-29 13:12:24.597671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.977 [2024-11-29 13:12:24.597805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.977 [2024-11-29 13:12:24.597825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.977 [2024-11-29 13:12:24.607433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.977 [2024-11-29 13:12:24.607569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.977 [2024-11-29 13:12:24.607589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.977 [2024-11-29 13:12:24.617173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.977 [2024-11-29 13:12:24.617305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.977 [2024-11-29 13:12:24.617323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.977 [2024-11-29 13:12:24.626907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.977 [2024-11-29 13:12:24.627067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.977 [2024-11-29 13:12:24.627086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.977 [2024-11-29 13:12:24.636589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.977 [2024-11-29 13:12:24.636726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.977 [2024-11-29 13:12:24.636744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.977 [2024-11-29 13:12:24.646297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.977 [2024-11-29 13:12:24.646446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.646465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.656037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.656168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.656186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.665738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.665870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.665888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.675464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.675595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.675613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.685159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.685290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:24.978 [2024-11-29 13:12:24.685307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.694819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.694969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.694987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.704536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.704668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.704686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.714132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.714264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.714281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.723811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.723963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:17781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.723981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.733638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.733770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.733789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.743521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.743657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.743677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.753441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.753574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.753591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.763175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.763310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.763328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.773038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.773170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.773189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.782994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.783129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.783147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:24.978 [2024-11-29 13:12:24.792713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:24.978 [2024-11-29 13:12:24.792847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.978 [2024-11-29 13:12:24.792868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.237 [2024-11-29 13:12:24.802718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 
00:28:25.237 [2024-11-29 13:12:24.802849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.237 [2024-11-29 13:12:24.802872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.237 [2024-11-29 13:12:24.812436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.237 [2024-11-29 13:12:24.812566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.237 [2024-11-29 13:12:24.812584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.237 [2024-11-29 13:12:24.822091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.237 [2024-11-29 13:12:24.822221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.237 [2024-11-29 13:12:24.822239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.237 [2024-11-29 13:12:24.831758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.237 [2024-11-29 13:12:24.831906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.237 [2024-11-29 13:12:24.831925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.237 [2024-11-29 13:12:24.841734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.237 [2024-11-29 13:12:24.841868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.237 [2024-11-29 13:12:24.841886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.237 [2024-11-29 13:12:24.851554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.237 [2024-11-29 13:12:24.851686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.237 [2024-11-29 13:12:24.851705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.237 [2024-11-29 13:12:24.861432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.237 [2024-11-29 13:12:24.861565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.237 [2024-11-29 13:12:24.861583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.237 [2024-11-29 13:12:24.871095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.237 [2024-11-29 13:12:24.871241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.237 [2024-11-29 13:12:24.871259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.237 [2024-11-29 13:12:24.880772] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.880903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.880920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.890434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.890572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.890589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.900118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.900277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.900295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.909777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.909907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.909924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 
m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.919459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.919608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.919625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.929140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.929290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.929308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.938847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.938992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.939009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.948537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.948666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.948683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.958252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.958383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.958400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.967955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.968106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.968124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.977609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.977740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.977757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.987261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.987409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.987427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:24.996971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:24.997102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:24.997120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:25.006584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:25.006716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:25.006734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:25.016291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:25.016421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:25.016438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:25.025910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:25.026045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:25.238 [2024-11-29 13:12:25.026063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:25.035548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:25.035696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:25.035714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:25.045256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:25.045385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:25.045403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.238 [2024-11-29 13:12:25.055012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.238 [2024-11-29 13:12:25.055146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.238 [2024-11-29 13:12:25.055170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.064928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.065067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:23662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.065086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.074657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.074787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.074805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.084320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.084449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.084482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.094284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.094417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.094434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.104113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.104261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.104279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.113847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.113992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.114009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.123623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.123754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.123771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.133244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.133374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.133391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.142944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 
00:28:25.497 [2024-11-29 13:12:25.143082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.143100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.152614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.152746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.152763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.162339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.162487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.162504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.172076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.172207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.172224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.181766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.181897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.181914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.191434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.191565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.191582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.201165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.201297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.201313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.210798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.210951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.210969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.220482] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.220613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.220633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.230161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.230292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.230310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.239779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.239910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.239927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.249483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.249616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.249633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 
m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.259114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.259245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.259263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.268823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.268973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.268991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.278763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.278896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.278914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.288547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.288693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.288711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.298376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.298524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.298541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.497 [2024-11-29 13:12:25.308207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.497 [2024-11-29 13:12:25.308343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.497 [2024-11-29 13:12:25.308360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.755 [2024-11-29 13:12:25.318174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.755 [2024-11-29 13:12:25.318335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.755 [2024-11-29 13:12:25.318354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.755 [2024-11-29 13:12:25.328167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.755 [2024-11-29 13:12:25.328300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.755 [2024-11-29 13:12:25.328319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.755 [2024-11-29 13:12:25.337797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.755 [2024-11-29 13:12:25.337927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.755 [2024-11-29 13:12:25.337966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.347763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.347911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.347929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.357610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.357757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.357775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.367419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.367551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:25.756 [2024-11-29 13:12:25.367568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.377030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.377158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.377176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.386746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.386894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.386911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.396410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.396541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.396559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.406038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.406187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:10932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.406204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.415739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.415868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.415885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.425396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.425550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.425568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.435144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.435274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.435291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.444787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.444919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.444936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.454510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.454658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.454675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.464285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.464415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.464432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.474028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:25.756 [2024-11-29 13:12:25.474179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.756 [2024-11-29 13:12:25.474210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:25.756 [2024-11-29 13:12:25.483694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 
00:28:25.756 [2024-11-29 13:12:25.483825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:25.756 [2024-11-29 13:12:25.483841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:25.756 [2024-11-29 13:12:25.493372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8
00:28:25.756 [2024-11-29 13:12:25.493520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:25.756 [2024-11-29 13:12:25.493538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:28:25.756 26097.00 IOPS, 101.94 MiB/s [2024-11-29T12:12:25.576Z]
[this digest-error / WRITE / TRANSIENT TRANSPORT ERROR triple repeats roughly every 10 ms on tqpair=(0x1935180), pdu=0x200016eff3c8, with varying cid and lba, from 13:12:25.503 through 13:12:26.240]
00:28:26.539 [2024-11-29 13:12:26.240922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8
00:28:26.539 [2024-11-29 13:12:26.241076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2
nsid:1 lba:21925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.241094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.250605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.250734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.250751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.260212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.260342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.260359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.269954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.270105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.270123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.279593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.279725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.279742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.289246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.289393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.289411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.298929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.299067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.299085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.308592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.308725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.308741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.318189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 
[2024-11-29 13:12:26.318320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.318337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.327989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.328122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.328140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.337648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.337798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.337817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.539 [2024-11-29 13:12:26.347360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.539 [2024-11-29 13:12:26.347488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.539 [2024-11-29 13:12:26.347522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.357360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) 
with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.357499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.357519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.367255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.367390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.367410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.377054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.377189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.377207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.386835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.386988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.387007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.396699] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.396828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.396845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.406380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.406527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.406548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.416059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.416192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.416210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.425768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.425917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.425935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.435495] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.435626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.435643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.445174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.445307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.799 [2024-11-29 13:12:26.445325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.799 [2024-11-29 13:12:26.454859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.799 [2024-11-29 13:12:26.455010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.455027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.464608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.464765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.464782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:28:26.800 [2024-11-29 13:12:26.474312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.474443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.474460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.483893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.484047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.484065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.493598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.493733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.493750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.503276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.503407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.503424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.512952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.513112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.513130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.522553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.522685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.522702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.532219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.532368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.532387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.541889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.542025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.542044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.551594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.551724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.551741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 [2024-11-29 13:12:26.561272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.561403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.561421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 26213.00 IOPS, 102.39 MiB/s [2024-11-29T12:12:26.620Z] [2024-11-29 13:12:26.570846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1935180) with pdu=0x200016eff3c8 00:28:26.800 [2024-11-29 13:12:26.570999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.800 [2024-11-29 13:12:26.571016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:26.800 00:28:26.800 Latency(us) 00:28:26.800 [2024-11-29T12:12:26.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.800 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.800 nvme0n1 : 2.00 26213.94 102.40 0.00 0.00 4874.40 3632.97 11112.63 00:28:26.800 
[2024-11-29T12:12:26.620Z] =================================================================================================================== 00:28:26.800 [2024-11-29T12:12:26.620Z] Total : 26213.94 102.40 0.00 0.00 4874.40 3632.97 11112.63 00:28:26.800 { 00:28:26.800 "results": [ 00:28:26.800 { 00:28:26.800 "job": "nvme0n1", 00:28:26.800 "core_mask": "0x2", 00:28:26.800 "workload": "randwrite", 00:28:26.800 "status": "finished", 00:28:26.800 "queue_depth": 128, 00:28:26.800 "io_size": 4096, 00:28:26.800 "runtime": 2.004811, 00:28:26.800 "iops": 26213.942361649053, 00:28:26.800 "mibps": 102.39821235019161, 00:28:26.800 "io_failed": 0, 00:28:26.800 "io_timeout": 0, 00:28:26.800 "avg_latency_us": 4874.402316193199, 00:28:26.800 "min_latency_us": 3632.973913043478, 00:28:26.800 "max_latency_us": 11112.626086956521 00:28:26.800 } 00:28:26.800 ], 00:28:26.800 "core_count": 1 00:28:26.800 } 00:28:26.800 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:26.800 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:26.800 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:26.800 | .driver_specific 00:28:26.800 | .nvme_error 00:28:26.800 | .status_code 00:28:26.800 | .command_transient_transport_error' 00:28:26.800 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 )) 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2141687 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2141687 ']' 00:28:27.059 
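For reference outside this trace: `get_transient_errcount` above pulls the per-status-code error counter out of `bdev_get_iostat` JSON with a jq pipeline. A minimal standalone sketch of that extraction against a canned payload (the trimmed JSON and its count value below are illustrative, not taken from this run; assumes `jq` is installed):

```shell
# Illustrative bdev_get_iostat-style payload; only the fields the jq
# filter touches are populated, and the 206 here is a made-up sample.
payload='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":206}}}}]}'

# Same pipeline the harness runs in get_transient_errcount:
count=$(echo "$payload" | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error')

# The harness treats the run as passing when at least one transient
# transport error was observed: (( count > 0 ))
(( count > 0 )) && echo "transient errors: $count"
```

The per-status-code counters only exist because the controller was created with `--nvme-error-stat`; without that flag the `nvme_error` object is not populated.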
13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2141687 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2141687 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2141687' 00:28:27.059 killing process with pid 2141687 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2141687 00:28:27.059 Received shutdown signal, test time was about 2.000000 seconds 00:28:27.059 00:28:27.059 Latency(us) 00:28:27.059 [2024-11-29T12:12:26.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.059 [2024-11-29T12:12:26.879Z] =================================================================================================================== 00:28:27.059 [2024-11-29T12:12:26.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.059 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2141687 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randwrite 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2142166 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2142166 /var/tmp/bperf.sock 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2142166 ']' 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:27.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:27.318 13:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.318 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.318 [2024-11-29 13:12:27.032218] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:28:27.318 [2024-11-29 13:12:27.032266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142166 ] 00:28:27.318 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:27.318 Zero copy mechanism will not be used. 00:28:27.318 [2024-11-29 13:12:27.095302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.578 [2024-11-29 13:12:27.139820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.578 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.578 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:27.578 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:27.578 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:27.837 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:27.837 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.837 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.837 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.837 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.837 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.095 nvme0n1 00:28:28.095 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:28.095 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.095 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:28.096 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.096 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:28.096 13:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.355 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.355 Zero copy mechanism will not be used. 00:28:28.355 Running I/O for 2 seconds... 
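The setup sequence traced above (error-stat tracking, digest-enabled attach, crc32c corruption injection) can be sketched as a standalone command list. The rpc.py flags, socket path, target address, and NQN are copied from the log; this sketch only composes the command lines and does not execute them against a live target:

```shell
# Hedged sketch of the digest-error test setup seen in the trace.
RPC="scripts/rpc.py -s /var/tmp/bperf.sock"

# 1) Track per-status-code NVMe error counters and retry I/O forever at
#    the bdev layer, so digest failures surface as counted retries.
SET_OPTS="$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1"

# 2) Attach the controller with TCP data digest enabled (--ddgst), so
#    every data PDU carries a CRC32C that the target will verify.
ATTACH="$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0"

# 3) Corrupt every 32nd crc32c computation in the accel layer, so a
#    fraction of data digests fail and produce the transient transport
#    errors logged below.
INJECT="$RPC accel_error_inject_error -o crc32c -t corrupt -i 32"

printf '%s\n' "$SET_OPTS" "$ATTACH" "$INJECT"
```

With this setup in place, `perform_tests` drives the workload and each corrupted digest shows up as a `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` completion, which the harness later counts via `bdev_get_iostat`.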
00:28:28.355 [2024-11-29 13:12:27.987388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.355 [2024-11-29 13:12:27.987473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.355 [2024-11-29 13:12:27.987500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.355 [2024-11-29 13:12:27.993353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.355 [2024-11-29 13:12:27.993476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.355 [2024-11-29 13:12:27.993499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.355 [2024-11-29 13:12:27.999554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.355 [2024-11-29 13:12:27.999719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.355 [2024-11-29 13:12:27.999740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.355 [2024-11-29 13:12:28.006202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.355 [2024-11-29 13:12:28.006356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.355 [2024-11-29 13:12:28.006375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.355 [2024-11-29 13:12:28.013109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.355 [2024-11-29 13:12:28.013256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.355 [2024-11-29 13:12:28.013275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.355 [2024-11-29 13:12:28.019976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.355 [2024-11-29 13:12:28.020126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.355 [2024-11-29 13:12:28.020145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.355 [2024-11-29 13:12:28.026639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.355 [2024-11-29 13:12:28.026771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.026790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.033402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.033558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.033580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.040060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.040247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.040267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.047007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.047154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.047173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.054120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.054293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.054312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.060282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.060405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.356 [2024-11-29 13:12:28.060423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.066031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.066141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.066159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.072824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.072945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.072971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.079482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.079604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.079623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.085591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.085720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7296 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.085738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.092456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.092568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.092586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.098568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.098716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.098734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.105063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.105213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.105231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.110973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.111074] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.111093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.117103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.117181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.117209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.123283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.123341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.123359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.128967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.129042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.129061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.134665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.134739] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.134757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.140317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.140452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.140470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.146229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.146292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.146310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.152051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.152119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.152137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.157655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with 
pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.157750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.157768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.356 [2024-11-29 13:12:28.162814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.356 [2024-11-29 13:12:28.162872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.356 [2024-11-29 13:12:28.162889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.357 [2024-11-29 13:12:28.168795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.357 [2024-11-29 13:12:28.168893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.357 [2024-11-29 13:12:28.168910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.616 [2024-11-29 13:12:28.174555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.616 [2024-11-29 13:12:28.174616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.616 [2024-11-29 13:12:28.174636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.616 [2024-11-29 13:12:28.180574] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.616 [2024-11-29 13:12:28.180660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.616 [2024-11-29 13:12:28.180679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.616 [2024-11-29 13:12:28.186446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.616 [2024-11-29 13:12:28.186552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.616 [2024-11-29 13:12:28.186571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.616 [2024-11-29 13:12:28.192926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.616 [2024-11-29 13:12:28.193018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.616 [2024-11-29 13:12:28.193041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.616 [2024-11-29 13:12:28.199578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.616 [2024-11-29 13:12:28.199657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.616 [2024-11-29 13:12:28.199676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.616 [2024-11-29 
13:12:28.206160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.616 [2024-11-29 13:12:28.206243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.206262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.213256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.213378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.213395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.220468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.220557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.220576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.227498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.227575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.227593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0046 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.234183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.234287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.234305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.240196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.240315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.240333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.246474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.246532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.246551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.252125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.252252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.252269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.258743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.258893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.258911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.265517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.265702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.265719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.272439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.272611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.272629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.279585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.279761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.279778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.286404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.286549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.286567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.293607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.293795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.293812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.301189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.301353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.301372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.308395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.308519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.617 [2024-11-29 13:12:28.308536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.314185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.314294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.314312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.320453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.320557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.320575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.326697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.326819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.326838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.333688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.333797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4224 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.333815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.339784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.339865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.339883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.344888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.617 [2024-11-29 13:12:28.344961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.617 [2024-11-29 13:12:28.344978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.617 [2024-11-29 13:12:28.350019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.350093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.350111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.355136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.355233] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.355250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.359817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.359890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.359915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.364918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.364980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.364998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.370100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.370193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.370211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.374880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 
13:12:28.374982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.375000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.379408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.379492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.379510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.384088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.384161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.384179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.388582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.388645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.388662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.393051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) 
with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.393123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.393140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.397872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.397939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.397962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.403652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.403781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.403799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.409110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.409181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.409198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.414387] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.414448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.414466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.420667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.420748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.420765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.426180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.426243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.426261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.618 [2024-11-29 13:12:28.432280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.618 [2024-11-29 13:12:28.432355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.618 [2024-11-29 13:12:28.432380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.878 [2024-11-29 
13:12:28.438749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.878 [2024-11-29 13:12:28.438870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.878 [2024-11-29 13:12:28.438890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.878 [2024-11-29 13:12:28.446198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.878 [2024-11-29 13:12:28.446368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.878 [2024-11-29 13:12:28.446388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.878 [2024-11-29 13:12:28.453631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.878 [2024-11-29 13:12:28.453768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.878 [2024-11-29 13:12:28.453786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.878 [2024-11-29 13:12:28.461567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.878 [2024-11-29 13:12:28.461694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.878 [2024-11-29 13:12:28.461712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0026 p:0 m:0 dnr:0 00:28:28.878 [2024-11-29 13:12:28.468838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.878 [2024-11-29 13:12:28.468987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.878 [2024-11-29 13:12:28.469005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.878 [2024-11-29 13:12:28.476057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.878 [2024-11-29 13:12:28.476212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.878 [2024-11-29 13:12:28.476230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.878 [2024-11-29 13:12:28.483431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.878 [2024-11-29 13:12:28.483583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.878 [2024-11-29 13:12:28.483601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.878 [2024-11-29 13:12:28.490982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.491100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.491118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.498078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.498238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.498257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.505432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.505580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.505599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.513522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.513697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.513715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.521181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.521341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.521364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.529102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.529293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.529311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.536633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.536734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.536752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.542361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.542442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.542460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.547034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.547116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.879 [2024-11-29 13:12:28.547134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.551759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.551820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.551838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.556542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.556600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.556618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.561083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.561145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.561162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.565569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.565643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.565661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.570182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.570268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.570287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.574765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.574858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.574876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.579280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.579341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.579359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.583754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.583831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.583849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.588224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.588299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.588317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.592885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.592965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.592999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.597501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.597571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.597589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.602191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 
00:28:28.879 [2024-11-29 13:12:28.602262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.602279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.606743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.606814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.606832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.611127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.611198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.611216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.615699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.615765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.615783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.620345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.620412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.620430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.624932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.625017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.625035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.629375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.629452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.879 [2024-11-29 13:12:28.629470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.879 [2024-11-29 13:12:28.633825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.879 [2024-11-29 13:12:28.633896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.633914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.638261] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.638334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.638352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.642799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.642874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.642893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.647644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.647715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.647737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.652231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.652302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.652320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 
dnr:0 00:28:28.880 [2024-11-29 13:12:28.657347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.657410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.657428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.662029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.662136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.662154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.667589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.667706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.667724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.674215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.674273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.674291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.680465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.680593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.680610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.688001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.688114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.688132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:28.880 [2024-11-29 13:12:28.696127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:28.880 [2024-11-29 13:12:28.696321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.880 [2024-11-29 13:12:28.696341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.704190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.704304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.704324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.712382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.712526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.712544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.720791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.720975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.720994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.728492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.728584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.728602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.736330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.736445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.140 [2024-11-29 13:12:28.736462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.743733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.743865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.743883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.752820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.752919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.752937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.761435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.761647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.761668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.769705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.769821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.769840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.777784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.778004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.778024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.784524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.784600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.784618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.789697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.789773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.789790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.794450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.794527] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.794545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.799244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.799312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.799330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.803899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.803970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.803988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.808548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.808610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.808628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.813205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.813281] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.813299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.817841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.817921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.817943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.822548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.822637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.822656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.827218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.827289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.140 [2024-11-29 13:12:28.827306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.140 [2024-11-29 13:12:28.831829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with 
pdu=0x200016eff3c8 00:28:29.140 [2024-11-29 13:12:28.831889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.831907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.836450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.836523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.836541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.841092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.841171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.841189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.845747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.845831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.845848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.850444] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.850516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.850534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.855071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.855149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.855166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.859666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.859754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.859772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.864302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.864362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.864379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 
13:12:28.868966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.869042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.869060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.873666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.873738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.873757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.878314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.878375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.878392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.882904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.882986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.883004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0066 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.887557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.887627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.887644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.892245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.892337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.892355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.898005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.898188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.898205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.904225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.904367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.904385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.910937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.911075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.911092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.917240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.917351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.917368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.922618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.922698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.922715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.927331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.927394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.927411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.932061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.932138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.932156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.936733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.936807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.936825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.941408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.941476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.941495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.946058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.946132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.141 [2024-11-29 13:12:28.946154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.950701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.950772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.950789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.141 [2024-11-29 13:12:28.955496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.141 [2024-11-29 13:12:28.955585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.141 [2024-11-29 13:12:28.955604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.401 [2024-11-29 13:12:28.960324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.401 [2024-11-29 13:12:28.960385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:28.960404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:28.964989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:28.965082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:28.965101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:28.969627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:28.969698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:28.969717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:28.974255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:28.974339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:28.974358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:28.978874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:28.978979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:28.978996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.402 5287.00 IOPS, 660.88 MiB/s [2024-11-29T12:12:29.222Z] [2024-11-29 13:12:28.984422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 
13:12:28.984494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:28.984513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:28.989023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:28.989085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:28.989103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:28.993698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:28.993771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:28.993789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:28.998395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:28.998456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:28.998474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.003297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) 
with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.003368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.003386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.008192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.008251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.008269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.013071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.013151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.013170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.018177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.018270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.018288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.023711] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.023769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.023787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.029693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.029769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.029787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.035574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.035635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.035654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.041250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.041330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.041347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 
13:12:29.046409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.046481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.046499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.051252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.051319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.051336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.056118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.056178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.056195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.061220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.061281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.061299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0046 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.066245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.066319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.066336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.071305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.071385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.071402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.076295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.076377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.076398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.081357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.081431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.081449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.086449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.086535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.086553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.091507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.091571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.402 [2024-11-29 13:12:29.091589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.402 [2024-11-29 13:12:29.096910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.402 [2024-11-29 13:12:29.096978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.097012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.101844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.101915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.101933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.106594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.106671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.106689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.111316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.111396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.111413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.115975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.116043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.116061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.120609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.120673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.403 [2024-11-29 13:12:29.120691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.125287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.125349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.125368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.129937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.130028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.130046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.134628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.134689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.134723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.139293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.139369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.139387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.143894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.143959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.143976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.148514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.148600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.148619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.153189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.153259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.153277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.157842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.157911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.157929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.162518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.162596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.162614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.167159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.167237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.167255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.171807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.171886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.171903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.176440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 
00:28:29.403 [2024-11-29 13:12:29.176513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.176530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.181044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.181129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.181147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.185979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.186060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.186079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.190813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.190891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.190908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.195827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.195902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.195919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.201072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.201166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.201187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.206914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.206988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.207006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.212856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.212928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.212945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.403 [2024-11-29 13:12:29.218210] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.403 [2024-11-29 13:12:29.218288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.403 [2024-11-29 13:12:29.218307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.223896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.223966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.223986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.229643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.229809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.229828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.235377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.235475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.235493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 
dnr:0 00:28:29.664 [2024-11-29 13:12:29.241078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.241139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.241157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.247238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.247370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.247388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.253029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.253089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.253107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.259047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.259106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.259125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.264638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.264701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.264719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.270829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.270907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.270925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.276651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.276724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.276742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.282375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.282447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.282465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.288237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.288318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.288336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.293971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.294033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.294051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.299796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.299869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.299891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.305413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.305479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.664 [2024-11-29 13:12:29.305496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.310685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.310749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.310767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.316436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.316498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.316516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.322191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.322371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.322389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.327834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.327908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.664 [2024-11-29 13:12:29.327927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.664 [2024-11-29 13:12:29.333367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.664 [2024-11-29 13:12:29.333427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.333445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.338542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.338622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.338640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.343446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.343521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.343539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.348780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.348873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.348895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.354396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.354456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.354474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.359145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.359218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.359236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.363684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.363770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.363788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.368428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 
00:28:29.665 [2024-11-29 13:12:29.368490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.368508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.373452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.373511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.373529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.378268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.378343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.378360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.382997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.383071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.383089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.387556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.387647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.387665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.392510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.392574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.392592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.397504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.397625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.397642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.402989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.403063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.403080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.408065] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.408153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.408172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.412913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.412982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.413000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.417778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.417851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.417869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.422839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.422929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.422953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 
dnr:0 00:28:29.665 [2024-11-29 13:12:29.427443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.427501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.427519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.431891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.431984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.432002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.436715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.665 [2024-11-29 13:12:29.436787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.665 [2024-11-29 13:12:29.436805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.665 [2024-11-29 13:12:29.441541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.666 [2024-11-29 13:12:29.441615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.666 [2024-11-29 13:12:29.441633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.666 [2024-11-29 13:12:29.447007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.666 [2024-11-29 13:12:29.447136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.666 [2024-11-29 13:12:29.447153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.666 [2024-11-29 13:12:29.452692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.666 [2024-11-29 13:12:29.452771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.666 [2024-11-29 13:12:29.452789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.666 [2024-11-29 13:12:29.458026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.666 [2024-11-29 13:12:29.458112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.666 [2024-11-29 13:12:29.458130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.666 [2024-11-29 13:12:29.462880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.666 [2024-11-29 13:12:29.462973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.666 [2024-11-29 13:12:29.462991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.666 [2024-11-29 13:12:29.467657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.666 [2024-11-29 13:12:29.467726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.666 [2024-11-29 13:12:29.467743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.666 [2024-11-29 13:12:29.472162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.666 [2024-11-29 13:12:29.472238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.666 [2024-11-29 13:12:29.472256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.666 [2024-11-29 13:12:29.476805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.666 [2024-11-29 13:12:29.476880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.666 [2024-11-29 13:12:29.476901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.666 [2024-11-29 13:12:29.481604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:29.666 [2024-11-29 13:12:29.481685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.666 [2024-11-29 13:12:29.481705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:28:29.926 [2024-11-29 13:12:29.486429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8
00:28:29.926 [2024-11-29 13:12:29.486497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.926 [2024-11-29 13:12:29.486516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:28:29.926 [2024-11-29 13:12:29.491243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8
00:28:29.926 [2024-11-29 13:12:29.491316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.926 [2024-11-29 13:12:29.491336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[... dozens of further iterations of the same three-record pattern, from 13:12:29.495 through 13:12:29.852 — tcp.c:2233:data_crc32_calc_done "Data digest error" on tqpair=(0x19354c0) with pdu=0x200016eff3c8, a WRITE sqid:1 cid:4 nsid:1 command print with varying lba (len:32 throughout), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with cycling sqhd values; the section ends mid-record at lba:4640 ...]
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.189 [2024-11-29 13:12:29.852695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.189 [2024-11-29 13:12:29.857616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.189 [2024-11-29 13:12:29.857687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.189 [2024-11-29 13:12:29.857705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.189 [2024-11-29 13:12:29.862182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.189 [2024-11-29 13:12:29.862281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.189 [2024-11-29 13:12:29.862299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:30.189 [2024-11-29 13:12:29.866694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.189 [2024-11-29 13:12:29.866758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.189 [2024-11-29 13:12:29.866779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:30.189 [2024-11-29 13:12:29.871006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.189 [2024-11-29 13:12:29.871088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.189 [2024-11-29 13:12:29.871106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.189 [2024-11-29 13:12:29.875383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.875456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.875474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.879760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.879827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.879845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.884263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.884346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.884364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.888977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 
00:28:30.190 [2024-11-29 13:12:29.889063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.889082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.893466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.893554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.893572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.898301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.898361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.898378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.903479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.903542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.903560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.909132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.909205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.909224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.913961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.914060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.914077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.918737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.918797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.918816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.923454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.923516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.923534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.927997] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.928064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.928083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.932709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.932809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.932827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.938184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.938285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.938303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.943594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.943721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.943740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 
dnr:0 00:28:30.190 [2024-11-29 13:12:29.949087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.949162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.949181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.953877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.953962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.953982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.958611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.958689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.958707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.963342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.963420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.963438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.967893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.967980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.967998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.973092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.973163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.973181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:30.190 [2024-11-29 13:12:29.978524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.978610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 13:12:29.978628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:30.190 5719.50 IOPS, 714.94 MiB/s [2024-11-29T12:12:30.010Z] [2024-11-29 13:12:29.985612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19354c0) with pdu=0x200016eff3c8 00:28:30.190 [2024-11-29 13:12:29.985695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.190 [2024-11-29 
13:12:29.985714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:30.190 00:28:30.190 Latency(us) 00:28:30.190 [2024-11-29T12:12:30.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.190 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:30.190 nvme0n1 : 2.00 5716.90 714.61 0.00 0.00 2794.37 1894.85 12081.42 00:28:30.190 [2024-11-29T12:12:30.010Z] =================================================================================================================== 00:28:30.190 [2024-11-29T12:12:30.010Z] Total : 5716.90 714.61 0.00 0.00 2794.37 1894.85 12081.42 00:28:30.190 { 00:28:30.190 "results": [ 00:28:30.190 { 00:28:30.190 "job": "nvme0n1", 00:28:30.190 "core_mask": "0x2", 00:28:30.190 "workload": "randwrite", 00:28:30.190 "status": "finished", 00:28:30.190 "queue_depth": 16, 00:28:30.190 "io_size": 131072, 00:28:30.190 "runtime": 2.003882, 00:28:30.190 "iops": 5716.903490325279, 00:28:30.190 "mibps": 714.6129362906598, 00:28:30.190 "io_failed": 0, 00:28:30.190 "io_timeout": 0, 00:28:30.190 "avg_latency_us": 2794.3659169298035, 00:28:30.190 "min_latency_us": 1894.8452173913045, 00:28:30.190 "max_latency_us": 12081.419130434782 00:28:30.190 } 00:28:30.190 ], 00:28:30.190 "core_count": 1 00:28:30.190 } 00:28:30.190 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:30.190 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:30.191 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:30.191 | .driver_specific 00:28:30.191 | .nvme_error 00:28:30.191 | .status_code 00:28:30.191 | .command_transient_transport_error' 00:28:30.191 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:30.448 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 )) 00:28:30.448 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2142166 00:28:30.448 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2142166 ']' 00:28:30.448 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2142166 00:28:30.448 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:30.448 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.448 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142166 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142166' 00:28:30.706 killing process with pid 2142166 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2142166 00:28:30.706 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.706 00:28:30.706 Latency(us) 00:28:30.706 [2024-11-29T12:12:30.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.706 [2024-11-29T12:12:30.526Z] =================================================================================================================== 00:28:30.706 [2024-11-29T12:12:30.526Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2142166 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2140501 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2140501 ']' 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2140501 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2140501 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140501' 00:28:30.706 killing process with pid 2140501 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2140501 00:28:30.706 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2140501 00:28:30.965 00:28:30.965 real 0m14.107s 00:28:30.965 user 0m27.126s 00:28:30.965 sys 0m4.368s 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.965 
************************************ 00:28:30.965 END TEST nvmf_digest_error 00:28:30.965 ************************************ 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.965 rmmod nvme_tcp 00:28:30.965 rmmod nvme_fabrics 00:28:30.965 rmmod nvme_keyring 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2140501 ']' 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2140501 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2140501 ']' 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2140501 00:28:30.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2140501) - No such process 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2140501 is not found' 00:28:30.965 Process with pid 2140501 is 
not found 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.965 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:33.497 00:28:33.497 real 0m35.822s 00:28:33.497 user 0m55.418s 00:28:33.497 sys 0m13.003s 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:33.497 ************************************ 00:28:33.497 END TEST nvmf_digest 00:28:33.497 ************************************ 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:33.497 13:12:32 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.497 ************************************ 00:28:33.497 START TEST nvmf_bdevperf 00:28:33.497 ************************************ 00:28:33.497 13:12:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:33.497 * Looking for test storage... 00:28:33.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.498 13:12:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:33.498 13:12:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.498 13:12:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@337 -- # IFS=.-: 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.498 --rc genhtml_branch_coverage=1 00:28:33.498 --rc genhtml_function_coverage=1 00:28:33.498 --rc genhtml_legend=1 00:28:33.498 --rc geninfo_all_blocks=1 00:28:33.498 --rc geninfo_unexecuted_blocks=1 00:28:33.498 00:28:33.498 ' 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.498 --rc genhtml_branch_coverage=1 00:28:33.498 --rc genhtml_function_coverage=1 00:28:33.498 --rc genhtml_legend=1 00:28:33.498 --rc geninfo_all_blocks=1 00:28:33.498 --rc geninfo_unexecuted_blocks=1 00:28:33.498 00:28:33.498 ' 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.498 --rc genhtml_branch_coverage=1 00:28:33.498 --rc genhtml_function_coverage=1 00:28:33.498 --rc genhtml_legend=1 00:28:33.498 --rc geninfo_all_blocks=1 00:28:33.498 --rc geninfo_unexecuted_blocks=1 00:28:33.498 00:28:33.498 ' 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.498 --rc genhtml_branch_coverage=1 00:28:33.498 --rc genhtml_function_coverage=1 00:28:33.498 --rc genhtml_legend=1 00:28:33.498 --rc geninfo_all_blocks=1 
00:28:33.498 --rc geninfo_unexecuted_blocks=1 00:28:33.498 00:28:33.498 ' 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:33.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:28:33.498 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable
00:28:33.499 13:12:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=()
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=()
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=()
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=()
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=()
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:28:38.766 Found 0000:86:00.0 (0x8086 - 0x159b)
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:28:38.766 Found 0000:86:00.1 (0x8086 - 0x159b)
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:28:38.766 Found net devices under 0000:86:00.0: cvl_0_0
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:28:38.766 Found net devices under 0000:86:00.1: cvl_0_1
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:38.766 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:38.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:38.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms
00:28:38.767 
00:28:38.767 --- 10.0.0.2 ping statistics ---
00:28:38.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:38.767 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:38.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:38.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms
00:28:38.767 
00:28:38.767 --- 10.0.0.1 ping statistics ---
00:28:38.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:38.767 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2146186
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2146186
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2146186 ']'
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:38.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:38.767 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:38.767 [2024-11-29 13:12:38.470165] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization...
00:28:38.767 [2024-11-29 13:12:38.470214] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:38.767 [2024-11-29 13:12:38.537652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:38.767 [2024-11-29 13:12:38.581015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:38.767 [2024-11-29 13:12:38.581052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:38.767 [2024-11-29 13:12:38.581060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:38.767 [2024-11-29 13:12:38.581066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:38.767 [2024-11-29 13:12:38.581072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:38.767 [2024-11-29 13:12:38.582477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:38.767 [2024-11-29 13:12:38.582542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:38.767 [2024-11-29 13:12:38.582544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:39.026 [2024-11-29 13:12:38.720988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:39.026 Malloc0
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:39.026 [2024-11-29 13:12:38.780704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:39.026 {
00:28:39.026 "params": {
00:28:39.026 "name": "Nvme$subsystem",
00:28:39.026 "trtype": "$TEST_TRANSPORT",
00:28:39.026 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:39.026 "adrfam": "ipv4",
00:28:39.026 "trsvcid": "$NVMF_PORT",
00:28:39.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:39.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:39.026 "hdgst": ${hdgst:-false},
00:28:39.026 "ddgst": ${ddgst:-false}
00:28:39.026 },
00:28:39.026 "method": "bdev_nvme_attach_controller"
00:28:39.026 }
00:28:39.026 EOF
00:28:39.026 )")
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:28:39.026 13:12:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:39.026 "params": {
00:28:39.026 "name": "Nvme1",
00:28:39.027 "trtype": "tcp",
00:28:39.027 "traddr": "10.0.0.2",
00:28:39.027 "adrfam": "ipv4",
00:28:39.027 "trsvcid": "4420",
00:28:39.027 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:39.027 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:39.027 "hdgst": false,
00:28:39.027 "ddgst": false
00:28:39.027 },
00:28:39.027 "method": "bdev_nvme_attach_controller"
00:28:39.027 }'
00:28:39.027 [2024-11-29 13:12:38.832742] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization...
00:28:39.027 [2024-11-29 13:12:38.832785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146408 ]
00:28:39.285 [2024-11-29 13:12:38.895087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:39.285 [2024-11-29 13:12:38.936669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:39.545 Running I/O for 1 seconds...
00:28:40.481 10598.00 IOPS, 41.40 MiB/s
00:28:40.481 
00:28:40.481 Latency(us)
00:28:40.481 [2024-11-29T12:12:40.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:40.481 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:40.481 Verification LBA range: start 0x0 length 0x4000
00:28:40.481 Nvme1n1 : 1.00 10688.99 41.75 0.00 0.00 11932.60 1488.81 16412.49
00:28:40.481 [2024-11-29T12:12:40.301Z] ===================================================================================================================
00:28:40.481 [2024-11-29T12:12:40.301Z] Total : 10688.99 41.75 0.00 0.00 11932.60 1488.81 16412.49
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2146641
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:40.741 {
00:28:40.741 "params": {
00:28:40.741 "name": "Nvme$subsystem",
00:28:40.741 "trtype": "$TEST_TRANSPORT",
00:28:40.741 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:40.741 "adrfam": "ipv4",
00:28:40.741 "trsvcid": "$NVMF_PORT",
00:28:40.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:40.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:40.741 "hdgst": ${hdgst:-false},
00:28:40.741 "ddgst": ${ddgst:-false}
00:28:40.741 },
00:28:40.741 "method": "bdev_nvme_attach_controller"
00:28:40.741 }
00:28:40.741 EOF
00:28:40.741 )")
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:28:40.741 13:12:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:40.741 "params": {
00:28:40.741 "name": "Nvme1",
00:28:40.741 "trtype": "tcp",
00:28:40.741 "traddr": "10.0.0.2",
00:28:40.741 "adrfam": "ipv4",
00:28:40.741 "trsvcid": "4420",
00:28:40.741 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:40.741 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:40.741 "hdgst": false,
00:28:40.741 "ddgst": false
00:28:40.741 },
00:28:40.741 "method": "bdev_nvme_attach_controller"
00:28:40.741 }'
00:28:40.741 [2024-11-29 13:12:40.434165] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization...
00:28:40.741 [2024-11-29 13:12:40.434215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146641 ]
00:28:40.741 [2024-11-29 13:12:40.497801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.741 [2024-11-29 13:12:40.536659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:41.460 Running I/O for 15 seconds...
00:28:43.359 10857.00 IOPS, 42.41 MiB/s
[2024-11-29T12:12:43.441Z] 10848.00 IOPS, 42.38 MiB/s
[2024-11-29T12:12:43.441Z] 13:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2146186
00:28:43.621 13:12:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:28:43.621 [2024-11-29 13:12:43.403754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.403793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.403810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.403819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.403829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.403837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.403847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.403854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.403865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.403878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.403886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.403894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.403904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.403911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.403920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.403926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.403935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.403943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.404074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.404082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.404093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.404100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.404110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.404118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.404128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.404137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.404146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.404154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.621 [2024-11-29 13:12:43.404163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.621 [2024-11-29 13:12:43.404170] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.621 [2024-11-29 13:12:43.404179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.621 [2024-11-29 13:12:43.404186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 
13:12:43.404343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404424] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 
[2024-11-29 13:12:43.404761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.622 [2024-11-29 13:12:43.404771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.622 [2024-11-29 13:12:43.404778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404845] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.404991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.404999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.623 [2024-11-29 13:12:43.405122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.623 [2024-11-29 13:12:43.405136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 
[2024-11-29 13:12:43.405191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.623 [2024-11-29 13:12:43.405345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:43.623 [2024-11-29 13:12:43.405353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.623 [2024-11-29 13:12:43.405359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.623 [2024-11-29 13:12:43.405367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.623 [2024-11-29 13:12:43.405373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:43.624 [2024-11-29 13:12:43.405819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:43.624 [2024-11-29 13:12:43.405835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.405842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce16c0 is same with the state(6) to be set
00:28:43.624 [2024-11-29 13:12:43.405850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:43.624 [2024-11-29 13:12:43.405856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:43.624 [2024-11-29 13:12:43.405862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84040 len:8 PRP1 0x0 PRP2 0x0
00:28:43.624 [2024-11-29 13:12:43.405872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:43.624 [2024-11-29 13:12:43.408758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:43.624 [2024-11-29 13:12:43.408813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:43.624 [2024-11-29 13:12:43.409431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:43.624 [2024-11-29 13:12:43.409449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:43.624 [2024-11-29 13:12:43.409461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:43.624 [2024-11-29 13:12:43.409641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:43.624 [2024-11-29 13:12:43.409820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:43.624 [2024-11-29 13:12:43.409829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:43.624 [2024-11-29 13:12:43.409837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:43.624 [2024-11-29 13:12:43.409845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:43.624 [2024-11-29 13:12:43.422097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.624 [2024-11-29 13:12:43.422556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.624 [2024-11-29 13:12:43.422603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.624 [2024-11-29 13:12:43.422627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.624 [2024-11-29 13:12:43.423198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.625 [2024-11-29 13:12:43.423378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.625 [2024-11-29 13:12:43.423387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.625 [2024-11-29 13:12:43.423394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.625 [2024-11-29 13:12:43.423401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.625 [2024-11-29 13:12:43.435129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.625 [2024-11-29 13:12:43.435560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.625 [2024-11-29 13:12:43.435578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.625 [2024-11-29 13:12:43.435586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.625 [2024-11-29 13:12:43.435765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.625 [2024-11-29 13:12:43.435944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.625 [2024-11-29 13:12:43.435961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.625 [2024-11-29 13:12:43.435968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.625 [2024-11-29 13:12:43.435975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.884 [2024-11-29 13:12:43.448255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.885 [2024-11-29 13:12:43.448710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.885 [2024-11-29 13:12:43.448756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.885 [2024-11-29 13:12:43.448780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.885 [2024-11-29 13:12:43.449380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.885 [2024-11-29 13:12:43.449573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.885 [2024-11-29 13:12:43.449582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.885 [2024-11-29 13:12:43.449588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.885 [2024-11-29 13:12:43.449594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.885 [2024-11-29 13:12:43.461229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.885 [2024-11-29 13:12:43.461701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.885 [2024-11-29 13:12:43.461746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.885 [2024-11-29 13:12:43.461770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.885 [2024-11-29 13:12:43.462368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.885 [2024-11-29 13:12:43.462801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.885 [2024-11-29 13:12:43.462809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.885 [2024-11-29 13:12:43.462816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.885 [2024-11-29 13:12:43.462822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.885 [2024-11-29 13:12:43.474248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.885 [2024-11-29 13:12:43.474638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.885 [2024-11-29 13:12:43.474684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.885 [2024-11-29 13:12:43.474708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.885 [2024-11-29 13:12:43.475305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.885 [2024-11-29 13:12:43.475760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.885 [2024-11-29 13:12:43.475768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.885 [2024-11-29 13:12:43.475775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.885 [2024-11-29 13:12:43.475781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.885 [2024-11-29 13:12:43.487240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.885 [2024-11-29 13:12:43.487584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.885 [2024-11-29 13:12:43.487602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.885 [2024-11-29 13:12:43.487609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.885 [2024-11-29 13:12:43.487782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.885 [2024-11-29 13:12:43.487963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.885 [2024-11-29 13:12:43.487972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.885 [2024-11-29 13:12:43.487978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.885 [2024-11-29 13:12:43.487988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.885 [2024-11-29 13:12:43.500190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.885 [2024-11-29 13:12:43.500633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.885 [2024-11-29 13:12:43.500674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.885 [2024-11-29 13:12:43.500699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.885 [2024-11-29 13:12:43.501299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.885 [2024-11-29 13:12:43.501573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.885 [2024-11-29 13:12:43.501581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.885 [2024-11-29 13:12:43.501587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.885 [2024-11-29 13:12:43.501594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.885 [2024-11-29 13:12:43.513110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.885 [2024-11-29 13:12:43.513567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.885 [2024-11-29 13:12:43.513583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.885 [2024-11-29 13:12:43.513590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.885 [2024-11-29 13:12:43.513763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.885 [2024-11-29 13:12:43.513935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.885 [2024-11-29 13:12:43.513943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.885 [2024-11-29 13:12:43.513957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.885 [2024-11-29 13:12:43.513963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.885 [2024-11-29 13:12:43.526008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.885 [2024-11-29 13:12:43.526439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.885 [2024-11-29 13:12:43.526483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.885 [2024-11-29 13:12:43.526506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.885 [2024-11-29 13:12:43.527104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.885 [2024-11-29 13:12:43.527651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.885 [2024-11-29 13:12:43.527659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.885 [2024-11-29 13:12:43.527665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.885 [2024-11-29 13:12:43.527672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.885 [2024-11-29 13:12:43.538962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.885 [2024-11-29 13:12:43.539399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.885 [2024-11-29 13:12:43.539415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.885 [2024-11-29 13:12:43.539422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.885 [2024-11-29 13:12:43.539586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.885 [2024-11-29 13:12:43.539749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.885 [2024-11-29 13:12:43.539757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.886 [2024-11-29 13:12:43.539763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.886 [2024-11-29 13:12:43.539769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.886 [2024-11-29 13:12:43.551820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.886 [2024-11-29 13:12:43.552253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.886 [2024-11-29 13:12:43.552270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.886 [2024-11-29 13:12:43.552277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.886 [2024-11-29 13:12:43.552450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.886 [2024-11-29 13:12:43.552623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.886 [2024-11-29 13:12:43.552630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.886 [2024-11-29 13:12:43.552637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.886 [2024-11-29 13:12:43.552643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.886 [2024-11-29 13:12:43.564760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.886 [2024-11-29 13:12:43.565190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.886 [2024-11-29 13:12:43.565207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.886 [2024-11-29 13:12:43.565215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.886 [2024-11-29 13:12:43.565388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.886 [2024-11-29 13:12:43.565563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.886 [2024-11-29 13:12:43.565571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.886 [2024-11-29 13:12:43.565577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.886 [2024-11-29 13:12:43.565583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.886 [2024-11-29 13:12:43.577775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.886 [2024-11-29 13:12:43.578246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.886 [2024-11-29 13:12:43.578263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.886 [2024-11-29 13:12:43.578274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.886 [2024-11-29 13:12:43.578447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.886 [2024-11-29 13:12:43.578619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.886 [2024-11-29 13:12:43.578627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.886 [2024-11-29 13:12:43.578634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.886 [2024-11-29 13:12:43.578640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.886 [2024-11-29 13:12:43.590680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.886 [2024-11-29 13:12:43.591116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.886 [2024-11-29 13:12:43.591133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.886 [2024-11-29 13:12:43.591140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.886 [2024-11-29 13:12:43.591304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.886 [2024-11-29 13:12:43.591468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.886 [2024-11-29 13:12:43.591476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.886 [2024-11-29 13:12:43.591482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.886 [2024-11-29 13:12:43.591487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.886 [2024-11-29 13:12:43.603600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.886 [2024-11-29 13:12:43.604043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.886 [2024-11-29 13:12:43.604088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.886 [2024-11-29 13:12:43.604111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.886 [2024-11-29 13:12:43.604695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.886 [2024-11-29 13:12:43.604931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.886 [2024-11-29 13:12:43.604938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.886 [2024-11-29 13:12:43.604944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.886 [2024-11-29 13:12:43.604956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.886 [2024-11-29 13:12:43.616557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.886 [2024-11-29 13:12:43.616992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.886 [2024-11-29 13:12:43.617038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.886 [2024-11-29 13:12:43.617061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.886 [2024-11-29 13:12:43.617644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.886 [2024-11-29 13:12:43.618060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.886 [2024-11-29 13:12:43.618068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.886 [2024-11-29 13:12:43.618074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.886 [2024-11-29 13:12:43.618081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.886 [2024-11-29 13:12:43.629478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.886 [2024-11-29 13:12:43.629907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.886 [2024-11-29 13:12:43.629924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.886 [2024-11-29 13:12:43.629932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.886 [2024-11-29 13:12:43.630132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.886 [2024-11-29 13:12:43.630312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.886 [2024-11-29 13:12:43.630320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.886 [2024-11-29 13:12:43.630327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.886 [2024-11-29 13:12:43.630333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.886 [2024-11-29 13:12:43.642450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.886 [2024-11-29 13:12:43.642866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.886 [2024-11-29 13:12:43.642883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.887 [2024-11-29 13:12:43.642891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.887 [2024-11-29 13:12:43.643070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.887 [2024-11-29 13:12:43.643244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.887 [2024-11-29 13:12:43.643252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.887 [2024-11-29 13:12:43.643258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.887 [2024-11-29 13:12:43.643264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.887 [2024-11-29 13:12:43.655395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.887 [2024-11-29 13:12:43.655786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.887 [2024-11-29 13:12:43.655803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.887 [2024-11-29 13:12:43.655811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.887 [2024-11-29 13:12:43.655995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.887 [2024-11-29 13:12:43.656175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.887 [2024-11-29 13:12:43.656184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.887 [2024-11-29 13:12:43.656191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.887 [2024-11-29 13:12:43.656202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.887 [2024-11-29 13:12:43.668482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.887 [2024-11-29 13:12:43.668929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.887 [2024-11-29 13:12:43.668946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.887 [2024-11-29 13:12:43.668960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.887 [2024-11-29 13:12:43.669139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.887 [2024-11-29 13:12:43.669317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.887 [2024-11-29 13:12:43.669326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.887 [2024-11-29 13:12:43.669333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.887 [2024-11-29 13:12:43.669340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.887 [2024-11-29 13:12:43.681621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.887 [2024-11-29 13:12:43.682047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.887 [2024-11-29 13:12:43.682065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.887 [2024-11-29 13:12:43.682072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.887 [2024-11-29 13:12:43.682252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.887 [2024-11-29 13:12:43.682431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.887 [2024-11-29 13:12:43.682440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.887 [2024-11-29 13:12:43.682446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.887 [2024-11-29 13:12:43.682452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:43.887 [2024-11-29 13:12:43.694713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:43.887 [2024-11-29 13:12:43.695173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.887 [2024-11-29 13:12:43.695201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:43.887 [2024-11-29 13:12:43.695209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:43.887 [2024-11-29 13:12:43.695382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:43.887 [2024-11-29 13:12:43.695556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:43.887 [2024-11-29 13:12:43.695564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:43.887 [2024-11-29 13:12:43.695570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:43.887 [2024-11-29 13:12:43.695577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.148 [2024-11-29 13:12:43.707805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.148 [2024-11-29 13:12:43.708170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.148 [2024-11-29 13:12:43.708187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.148 [2024-11-29 13:12:43.708195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.148 [2024-11-29 13:12:43.708374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.148 [2024-11-29 13:12:43.708555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.148 [2024-11-29 13:12:43.708564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.148 [2024-11-29 13:12:43.708570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.148 [2024-11-29 13:12:43.708577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.148 [2024-11-29 13:12:43.720857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.148 [2024-11-29 13:12:43.721232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.148 [2024-11-29 13:12:43.721271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.148 [2024-11-29 13:12:43.721297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.148 [2024-11-29 13:12:43.721881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.148 [2024-11-29 13:12:43.722394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.148 [2024-11-29 13:12:43.722403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.148 [2024-11-29 13:12:43.722409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.148 [2024-11-29 13:12:43.722415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.148 [2024-11-29 13:12:43.733715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.148 [2024-11-29 13:12:43.734141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.148 [2024-11-29 13:12:43.734158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.148 [2024-11-29 13:12:43.734165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.148 [2024-11-29 13:12:43.734329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.148 [2024-11-29 13:12:43.734492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.148 [2024-11-29 13:12:43.734500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.148 [2024-11-29 13:12:43.734506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.148 [2024-11-29 13:12:43.734512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.148 [2024-11-29 13:12:43.746620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.148 [2024-11-29 13:12:43.747053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.148 [2024-11-29 13:12:43.747070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.148 [2024-11-29 13:12:43.747080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.148 [2024-11-29 13:12:43.747244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.148 [2024-11-29 13:12:43.747406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.148 [2024-11-29 13:12:43.747414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.148 [2024-11-29 13:12:43.747420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.148 [2024-11-29 13:12:43.747426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.148 [2024-11-29 13:12:43.759703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.148 [2024-11-29 13:12:43.760118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.148 [2024-11-29 13:12:43.760134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.148 [2024-11-29 13:12:43.760141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.148 [2024-11-29 13:12:43.760315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.148 [2024-11-29 13:12:43.760493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.148 [2024-11-29 13:12:43.760500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.148 [2024-11-29 13:12:43.760506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.148 [2024-11-29 13:12:43.760512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.148 [2024-11-29 13:12:43.772623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.148 [2024-11-29 13:12:43.773078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.148 [2024-11-29 13:12:43.773124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.148 [2024-11-29 13:12:43.773148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.148 [2024-11-29 13:12:43.773732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.148 [2024-11-29 13:12:43.774333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.148 [2024-11-29 13:12:43.774372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.148 [2024-11-29 13:12:43.774379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.148 [2024-11-29 13:12:43.774387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.148 [2024-11-29 13:12:43.785552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.148 [2024-11-29 13:12:43.785944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.148 [2024-11-29 13:12:43.786002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.148 [2024-11-29 13:12:43.786025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.148 [2024-11-29 13:12:43.786498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.148 [2024-11-29 13:12:43.786675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.148 [2024-11-29 13:12:43.786683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.148 [2024-11-29 13:12:43.786690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.148 [2024-11-29 13:12:43.786696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.148 [2024-11-29 13:12:43.798431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.148 [2024-11-29 13:12:43.798893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.148 [2024-11-29 13:12:43.798936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.148 [2024-11-29 13:12:43.798973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.148 [2024-11-29 13:12:43.799472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.148 [2024-11-29 13:12:43.799646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.148 [2024-11-29 13:12:43.799654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.148 [2024-11-29 13:12:43.799660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.148 [2024-11-29 13:12:43.799666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.148 [2024-11-29 13:12:43.811242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.148 [2024-11-29 13:12:43.811700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.148 [2024-11-29 13:12:43.811744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.148 [2024-11-29 13:12:43.811768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.148 [2024-11-29 13:12:43.812313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.148 [2024-11-29 13:12:43.812488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.148 [2024-11-29 13:12:43.812497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.148 [2024-11-29 13:12:43.812503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.149 [2024-11-29 13:12:43.812509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.149 [2024-11-29 13:12:43.824075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.149 [2024-11-29 13:12:43.824500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.149 [2024-11-29 13:12:43.824516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.149 [2024-11-29 13:12:43.824523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.149 [2024-11-29 13:12:43.824686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.149 [2024-11-29 13:12:43.824850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.149 [2024-11-29 13:12:43.824858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.149 [2024-11-29 13:12:43.824864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.149 [2024-11-29 13:12:43.824873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.149 [2024-11-29 13:12:43.836924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.149 [2024-11-29 13:12:43.837368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.149 [2024-11-29 13:12:43.837414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.149 [2024-11-29 13:12:43.837437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.149 [2024-11-29 13:12:43.838046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.149 [2024-11-29 13:12:43.838578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.149 [2024-11-29 13:12:43.838585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.149 [2024-11-29 13:12:43.838592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.149 [2024-11-29 13:12:43.838598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.149 [2024-11-29 13:12:43.849868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.149 [2024-11-29 13:12:43.850327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.149 [2024-11-29 13:12:43.850344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.149 [2024-11-29 13:12:43.850351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.149 [2024-11-29 13:12:43.850524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.149 [2024-11-29 13:12:43.850698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.149 [2024-11-29 13:12:43.850707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.149 [2024-11-29 13:12:43.850713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.149 [2024-11-29 13:12:43.850719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.149 [2024-11-29 13:12:43.862820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.149 [2024-11-29 13:12:43.863293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.149 [2024-11-29 13:12:43.863339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.149 [2024-11-29 13:12:43.863363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.149 [2024-11-29 13:12:43.863821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.149 [2024-11-29 13:12:43.864017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.149 [2024-11-29 13:12:43.864026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.149 [2024-11-29 13:12:43.864033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.149 [2024-11-29 13:12:43.864039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.149 [2024-11-29 13:12:43.875741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.149 [2024-11-29 13:12:43.876214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.149 [2024-11-29 13:12:43.876259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.149 [2024-11-29 13:12:43.876282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.149 [2024-11-29 13:12:43.876727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.149 [2024-11-29 13:12:43.876901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.149 [2024-11-29 13:12:43.876909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.149 [2024-11-29 13:12:43.876915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.149 [2024-11-29 13:12:43.876921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.149 [2024-11-29 13:12:43.888591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.149 [2024-11-29 13:12:43.888958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.149 [2024-11-29 13:12:43.888975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.149 [2024-11-29 13:12:43.888982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.149 [2024-11-29 13:12:43.889155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.149 [2024-11-29 13:12:43.889329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.149 [2024-11-29 13:12:43.889337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.149 [2024-11-29 13:12:43.889344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.149 [2024-11-29 13:12:43.889351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.149 [2024-11-29 13:12:43.901535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.149 [2024-11-29 13:12:43.901955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.149 [2024-11-29 13:12:43.901972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.149 [2024-11-29 13:12:43.901980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.149 [2024-11-29 13:12:43.902153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.149 [2024-11-29 13:12:43.902327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.149 [2024-11-29 13:12:43.902335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.149 [2024-11-29 13:12:43.902341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.149 [2024-11-29 13:12:43.902347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.149 8957.33 IOPS, 34.99 MiB/s [2024-11-29T12:12:43.969Z] [2024-11-29 13:12:43.915775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.149 [2024-11-29 13:12:43.916221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.149 [2024-11-29 13:12:43.916267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.149 [2024-11-29 13:12:43.916300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.149 [2024-11-29 13:12:43.916781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.149 [2024-11-29 13:12:43.916960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.149 [2024-11-29 13:12:43.916969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.149 [2024-11-29 13:12:43.916992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.149 [2024-11-29 13:12:43.917000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.149 [2024-11-29 13:12:43.928861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.149 [2024-11-29 13:12:43.929232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.149 [2024-11-29 13:12:43.929277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.149 [2024-11-29 13:12:43.929301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.149 [2024-11-29 13:12:43.929807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.149 [2024-11-29 13:12:43.929989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.149 [2024-11-29 13:12:43.929998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.149 [2024-11-29 13:12:43.930005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.149 [2024-11-29 13:12:43.930011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.149 [2024-11-29 13:12:43.941955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.149 [2024-11-29 13:12:43.942321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.149 [2024-11-29 13:12:43.942337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.149 [2024-11-29 13:12:43.942344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.149 [2024-11-29 13:12:43.942517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.149 [2024-11-29 13:12:43.942691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.149 [2024-11-29 13:12:43.942699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.149 [2024-11-29 13:12:43.942706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.150 [2024-11-29 13:12:43.942712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.150 [2024-11-29 13:12:43.954792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.150 [2024-11-29 13:12:43.955248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.150 [2024-11-29 13:12:43.955265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.150 [2024-11-29 13:12:43.955272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.150 [2024-11-29 13:12:43.955446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.150 [2024-11-29 13:12:43.955625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.150 [2024-11-29 13:12:43.955634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.150 [2024-11-29 13:12:43.955640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.150 [2024-11-29 13:12:43.955646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.410 [2024-11-29 13:12:43.967952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.410 [2024-11-29 13:12:43.968395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.410 [2024-11-29 13:12:43.968411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.410 [2024-11-29 13:12:43.968418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.410 [2024-11-29 13:12:43.968582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.410 [2024-11-29 13:12:43.968745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.410 [2024-11-29 13:12:43.968753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.410 [2024-11-29 13:12:43.968759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.410 [2024-11-29 13:12:43.968765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.410 [2024-11-29 13:12:43.980763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.410 [2024-11-29 13:12:43.981224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.410 [2024-11-29 13:12:43.981240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.410 [2024-11-29 13:12:43.981248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.410 [2024-11-29 13:12:43.981421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.410 [2024-11-29 13:12:43.981595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.410 [2024-11-29 13:12:43.981603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.410 [2024-11-29 13:12:43.981609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.410 [2024-11-29 13:12:43.981615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:43.993818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:43.994181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:43.994199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:43.994206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:43.994385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:43.994564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:43.994573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:43.994583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:43.994590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:44.006852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:44.007270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:44.007286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:44.007294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:44.007467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:44.007645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:44.007653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:44.007660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:44.007666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:44.019743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:44.020170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:44.020187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:44.020195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:44.020367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:44.020544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:44.020552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:44.020558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:44.020565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:44.032681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:44.033117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:44.033162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:44.033185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:44.033662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:44.033827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:44.033835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:44.033841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:44.033847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:44.045605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:44.046062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:44.046107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:44.046131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:44.046714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:44.046910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:44.046918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:44.046924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:44.046930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:44.058524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:44.058956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:44.058973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:44.058979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:44.059143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:44.059305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:44.059313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:44.059319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:44.059325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:44.071486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:44.071891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:44.071934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:44.071972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:44.072558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:44.073053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:44.073062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:44.073068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:44.073075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:44.084445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:44.084908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:44.084965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:44.084998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:44.085581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:44.086110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:44.086128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:44.086135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:44.086142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:44.097388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:44.097729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:44.097746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:44.097753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:44.097926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:44.098105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:44.098114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:44.098120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:44.098126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.411 [2024-11-29 13:12:44.110344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.411 [2024-11-29 13:12:44.110778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.411 [2024-11-29 13:12:44.110821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.411 [2024-11-29 13:12:44.110845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.411 [2024-11-29 13:12:44.111279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.411 [2024-11-29 13:12:44.111453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.411 [2024-11-29 13:12:44.111461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.411 [2024-11-29 13:12:44.111467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.411 [2024-11-29 13:12:44.111473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.412 [2024-11-29 13:12:44.123285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.412 [2024-11-29 13:12:44.123680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.412 [2024-11-29 13:12:44.123723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.412 [2024-11-29 13:12:44.123747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.412 [2024-11-29 13:12:44.124244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.412 [2024-11-29 13:12:44.124423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.412 [2024-11-29 13:12:44.124432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.412 [2024-11-29 13:12:44.124439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.412 [2024-11-29 13:12:44.124445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.412 [2024-11-29 13:12:44.136264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.412 [2024-11-29 13:12:44.136692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.412 [2024-11-29 13:12:44.136709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.412 [2024-11-29 13:12:44.136716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.412 [2024-11-29 13:12:44.136889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.412 [2024-11-29 13:12:44.137068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.412 [2024-11-29 13:12:44.137077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.412 [2024-11-29 13:12:44.137084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.412 [2024-11-29 13:12:44.137090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.412 [2024-11-29 13:12:44.149227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.412 [2024-11-29 13:12:44.149567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.412 [2024-11-29 13:12:44.149584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.412 [2024-11-29 13:12:44.149592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.412 [2024-11-29 13:12:44.149765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.412 [2024-11-29 13:12:44.149939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.412 [2024-11-29 13:12:44.149955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.412 [2024-11-29 13:12:44.149962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.412 [2024-11-29 13:12:44.149969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.412 [2024-11-29 13:12:44.162457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.412 [2024-11-29 13:12:44.162762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.412 [2024-11-29 13:12:44.162779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.412 [2024-11-29 13:12:44.162787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.412 [2024-11-29 13:12:44.162971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.412 [2024-11-29 13:12:44.163150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.412 [2024-11-29 13:12:44.163158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.412 [2024-11-29 13:12:44.163168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.412 [2024-11-29 13:12:44.163175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.412 [2024-11-29 13:12:44.175565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.412 [2024-11-29 13:12:44.175997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.412 [2024-11-29 13:12:44.176015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.412 [2024-11-29 13:12:44.176022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.412 [2024-11-29 13:12:44.176211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.412 [2024-11-29 13:12:44.176385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.412 [2024-11-29 13:12:44.176394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.412 [2024-11-29 13:12:44.176401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.412 [2024-11-29 13:12:44.176407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.412 [2024-11-29 13:12:44.188711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.412 [2024-11-29 13:12:44.189067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.412 [2024-11-29 13:12:44.189110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.412 [2024-11-29 13:12:44.189135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.412 [2024-11-29 13:12:44.189719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.412 [2024-11-29 13:12:44.190315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.412 [2024-11-29 13:12:44.190341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.412 [2024-11-29 13:12:44.190374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.412 [2024-11-29 13:12:44.190381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.412 [2024-11-29 13:12:44.201843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.412 [2024-11-29 13:12:44.202276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.412 [2024-11-29 13:12:44.202320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.412 [2024-11-29 13:12:44.202343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.412 [2024-11-29 13:12:44.202928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.412 [2024-11-29 13:12:44.203479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.412 [2024-11-29 13:12:44.203488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.412 [2024-11-29 13:12:44.203494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.412 [2024-11-29 13:12:44.203500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.412 [2024-11-29 13:12:44.214827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.412 [2024-11-29 13:12:44.215122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.412 [2024-11-29 13:12:44.215138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.412 [2024-11-29 13:12:44.215146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.412 [2024-11-29 13:12:44.215318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.412 [2024-11-29 13:12:44.215495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.412 [2024-11-29 13:12:44.215503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.412 [2024-11-29 13:12:44.215510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.412 [2024-11-29 13:12:44.215516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.412 [2024-11-29 13:12:44.227904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.412 [2024-11-29 13:12:44.228292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.412 [2024-11-29 13:12:44.228309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.412 [2024-11-29 13:12:44.228317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.673 [2024-11-29 13:12:44.228496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.674 [2024-11-29 13:12:44.228676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.674 [2024-11-29 13:12:44.228685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.674 [2024-11-29 13:12:44.228692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.674 [2024-11-29 13:12:44.228698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.674 [2024-11-29 13:12:44.240820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.674 [2024-11-29 13:12:44.241197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.674 [2024-11-29 13:12:44.241214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.674 [2024-11-29 13:12:44.241221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.674 [2024-11-29 13:12:44.241394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.674 [2024-11-29 13:12:44.241566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.674 [2024-11-29 13:12:44.241574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.674 [2024-11-29 13:12:44.241580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.674 [2024-11-29 13:12:44.241587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.674 [2024-11-29 13:12:44.253852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.674 [2024-11-29 13:12:44.254247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.674 [2024-11-29 13:12:44.254264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:44.674 [2024-11-29 13:12:44.254274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:44.674 [2024-11-29 13:12:44.254447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:44.674 [2024-11-29 13:12:44.254620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.674 [2024-11-29 13:12:44.254629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.674 [2024-11-29 13:12:44.254635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.674 [2024-11-29 13:12:44.254641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.674 [2024-11-29 13:12:44.266781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.674 [2024-11-29 13:12:44.267127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.674 [2024-11-29 13:12:44.267144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.674 [2024-11-29 13:12:44.267151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.674 [2024-11-29 13:12:44.267324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.674 [2024-11-29 13:12:44.267497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.674 [2024-11-29 13:12:44.267505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.674 [2024-11-29 13:12:44.267511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.674 [2024-11-29 13:12:44.267517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.674 [2024-11-29 13:12:44.279751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.674 [2024-11-29 13:12:44.280129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.674 [2024-11-29 13:12:44.280146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.674 [2024-11-29 13:12:44.280153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.674 [2024-11-29 13:12:44.280326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.674 [2024-11-29 13:12:44.280499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.674 [2024-11-29 13:12:44.280507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.674 [2024-11-29 13:12:44.280513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.674 [2024-11-29 13:12:44.280519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.674 [2024-11-29 13:12:44.292769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.674 [2024-11-29 13:12:44.293128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.674 [2024-11-29 13:12:44.293145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.674 [2024-11-29 13:12:44.293152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.674 [2024-11-29 13:12:44.293325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.674 [2024-11-29 13:12:44.293502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.674 [2024-11-29 13:12:44.293511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.674 [2024-11-29 13:12:44.293517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.674 [2024-11-29 13:12:44.293523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.674 [2024-11-29 13:12:44.305630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.674 [2024-11-29 13:12:44.305915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.674 [2024-11-29 13:12:44.305931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.674 [2024-11-29 13:12:44.305939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.674 [2024-11-29 13:12:44.306117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.674 [2024-11-29 13:12:44.306302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.674 [2024-11-29 13:12:44.306309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.674 [2024-11-29 13:12:44.306315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.674 [2024-11-29 13:12:44.306321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.674 [2024-11-29 13:12:44.318551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.674 [2024-11-29 13:12:44.318894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.674 [2024-11-29 13:12:44.318910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.674 [2024-11-29 13:12:44.318917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.674 [2024-11-29 13:12:44.319096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.674 [2024-11-29 13:12:44.319269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.674 [2024-11-29 13:12:44.319276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.674 [2024-11-29 13:12:44.319283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.674 [2024-11-29 13:12:44.319289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.674 [2024-11-29 13:12:44.331419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.674 [2024-11-29 13:12:44.331802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.674 [2024-11-29 13:12:44.331818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.674 [2024-11-29 13:12:44.331826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.674 [2024-11-29 13:12:44.332003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.674 [2024-11-29 13:12:44.332177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.674 [2024-11-29 13:12:44.332186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.674 [2024-11-29 13:12:44.332196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.674 [2024-11-29 13:12:44.332202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.674 [2024-11-29 13:12:44.344336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.674 [2024-11-29 13:12:44.344774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.674 [2024-11-29 13:12:44.344791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.674 [2024-11-29 13:12:44.344799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.674 [2024-11-29 13:12:44.344980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.674 [2024-11-29 13:12:44.345153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.674 [2024-11-29 13:12:44.345162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.674 [2024-11-29 13:12:44.345168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.674 [2024-11-29 13:12:44.345174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.674 [2024-11-29 13:12:44.357235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.674 [2024-11-29 13:12:44.357666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.674 [2024-11-29 13:12:44.357683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.357690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.357862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.358042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.358051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.358057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.358064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.370128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.370472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.370511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.370536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.371135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.371723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.371748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.371769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.371788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.383637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.383989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.384031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.384056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.384586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.384761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.384769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.384775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.384782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.396497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.396851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.396867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.396874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.397053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.397227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.397236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.397242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.397249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.409367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.409676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.409692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.409699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.409871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.410049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.410056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.410062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.410068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.422474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.422756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.422772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.422783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.422962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.423136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.423144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.423151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.423157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.435329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.435694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.435729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.435756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.436357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.436946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.436984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.437007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.437013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.448423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.448785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.448803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.448811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.448995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.449173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.449183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.449190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.449197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.461525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.461957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.461974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.461981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.462154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.462330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.462338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.462344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.462350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.474572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.474931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.474952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.675 [2024-11-29 13:12:44.474959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.675 [2024-11-29 13:12:44.475132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.675 [2024-11-29 13:12:44.475304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.675 [2024-11-29 13:12:44.475313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.675 [2024-11-29 13:12:44.475319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.675 [2024-11-29 13:12:44.475325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.675 [2024-11-29 13:12:44.487587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.675 [2024-11-29 13:12:44.487957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.675 [2024-11-29 13:12:44.487991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.676 [2024-11-29 13:12:44.487999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.676 [2024-11-29 13:12:44.488177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.676 [2024-11-29 13:12:44.488358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.676 [2024-11-29 13:12:44.488367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.676 [2024-11-29 13:12:44.488375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.676 [2024-11-29 13:12:44.488381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.500614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.501029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.501046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.501053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.501227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.501400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.501408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.501418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.501425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.513559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.513911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.513928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.513935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.514113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.514286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.514295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.514301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.514307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.526579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.526882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.526899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.526907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.527085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.527259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.527267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.527273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.527279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.539528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.539909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.539926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.539933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.540111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.540284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.540292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.540299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.540305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.552538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.552977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.552994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.553001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.553182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.553346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.553354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.553360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.553366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.565478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.565883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.565899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.565906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.566095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.566268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.566276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.566282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.566288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.578383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.578784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.578842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.578866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.579375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.579549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.579557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.579563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.579569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.591303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.591728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.591745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.591757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.591931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.592111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.592120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.592126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.592132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.604146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.604525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.604540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.604547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.604710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.604873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.604881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.604886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.604892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.617019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.617460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.617477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.617484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.617658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.617830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.617839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.617846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.617852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.629900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.630329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.630346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.630353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.630526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.630702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.630710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.630717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.630723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.642780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.937 [2024-11-29 13:12:44.643149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.937 [2024-11-29 13:12:44.643194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.937 [2024-11-29 13:12:44.643218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.937 [2024-11-29 13:12:44.643723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.937 [2024-11-29 13:12:44.643895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.937 [2024-11-29 13:12:44.643904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.937 [2024-11-29 13:12:44.643910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.937 [2024-11-29 13:12:44.643916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.937 [2024-11-29 13:12:44.655761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.938 [2024-11-29 13:12:44.656187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.938 [2024-11-29 13:12:44.656232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.938 [2024-11-29 13:12:44.656255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.938 [2024-11-29 13:12:44.656711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.938 [2024-11-29 13:12:44.656885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.938 [2024-11-29 13:12:44.656893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.938 [2024-11-29 13:12:44.656899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.938 [2024-11-29 13:12:44.656905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.938 [2024-11-29 13:12:44.668620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.938 [2024-11-29 13:12:44.669062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.938 [2024-11-29 13:12:44.669108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.938 [2024-11-29 13:12:44.669131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.938 [2024-11-29 13:12:44.669616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.938 [2024-11-29 13:12:44.669789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.938 [2024-11-29 13:12:44.669797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.938 [2024-11-29 13:12:44.669804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.938 [2024-11-29 13:12:44.669813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.938 [2024-11-29 13:12:44.681527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.938 [2024-11-29 13:12:44.681933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.938 [2024-11-29 13:12:44.681988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.938 [2024-11-29 13:12:44.682012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.938 [2024-11-29 13:12:44.682546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.938 [2024-11-29 13:12:44.682710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.938 [2024-11-29 13:12:44.682718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.938 [2024-11-29 13:12:44.682723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.938 [2024-11-29 13:12:44.682730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.938 [2024-11-29 13:12:44.694603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.938 [2024-11-29 13:12:44.695059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.938 [2024-11-29 13:12:44.695077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.938 [2024-11-29 13:12:44.695085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.938 [2024-11-29 13:12:44.695263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.938 [2024-11-29 13:12:44.695441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.938 [2024-11-29 13:12:44.695450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.938 [2024-11-29 13:12:44.695457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.938 [2024-11-29 13:12:44.695463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.938 [2024-11-29 13:12:44.707740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.938 [2024-11-29 13:12:44.708179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.938 [2024-11-29 13:12:44.708196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.938 [2024-11-29 13:12:44.708204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.938 [2024-11-29 13:12:44.708382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.938 [2024-11-29 13:12:44.708561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.938 [2024-11-29 13:12:44.708570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.938 [2024-11-29 13:12:44.708576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.938 [2024-11-29 13:12:44.708583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.938 [2024-11-29 13:12:44.720843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.938 [2024-11-29 13:12:44.721312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.938 [2024-11-29 13:12:44.721330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.938 [2024-11-29 13:12:44.721338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.938 [2024-11-29 13:12:44.721516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.938 [2024-11-29 13:12:44.721696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.938 [2024-11-29 13:12:44.721704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.938 [2024-11-29 13:12:44.721712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.938 [2024-11-29 13:12:44.721718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.938 [2024-11-29 13:12:44.733987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.938 [2024-11-29 13:12:44.734417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.938 [2024-11-29 13:12:44.734434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.938 [2024-11-29 13:12:44.734442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.938 [2024-11-29 13:12:44.734620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.938 [2024-11-29 13:12:44.734799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.938 [2024-11-29 13:12:44.734808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.938 [2024-11-29 13:12:44.734814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.938 [2024-11-29 13:12:44.734821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.938 [2024-11-29 13:12:44.747125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.938 [2024-11-29 13:12:44.747552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.938 [2024-11-29 13:12:44.747596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:44.938 [2024-11-29 13:12:44.747620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:44.938 [2024-11-29 13:12:44.748217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:44.938 [2024-11-29 13:12:44.748742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.938 [2024-11-29 13:12:44.748750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.938 [2024-11-29 13:12:44.748757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.938 [2024-11-29 13:12:44.748763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.198 [2024-11-29 13:12:44.760136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.198 [2024-11-29 13:12:44.760486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.198 [2024-11-29 13:12:44.760503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.198 [2024-11-29 13:12:44.760514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.198 [2024-11-29 13:12:44.760693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.198 [2024-11-29 13:12:44.760874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.198 [2024-11-29 13:12:44.760884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.198 [2024-11-29 13:12:44.760891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.198 [2024-11-29 13:12:44.760897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.198 [2024-11-29 13:12:44.773182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.198 [2024-11-29 13:12:44.773572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.198 [2024-11-29 13:12:44.773617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.198 [2024-11-29 13:12:44.773641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.198 [2024-11-29 13:12:44.774241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.198 [2024-11-29 13:12:44.774831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.774856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.774876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.774896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.786146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.786525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.786542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.786549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.786723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.786897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.786905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.786911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.786917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.798943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.799379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.799422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.799446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.799976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.800154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.800163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.800169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.800175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.811774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.812186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.812203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.812210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.812382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.812555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.812563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.812569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.812576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.824588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.825022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.825039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.825046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.825219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.825391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.825399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.825405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.825412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.837472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.837876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.837891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.837898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.838094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.838269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.838277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.838283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.838292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.850414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.850848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.850892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.850915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.851512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.852046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.852054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.852060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.852067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.863340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.863783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.863826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.863849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.864446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.865048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.865056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.865062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.865069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.876257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.876706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.876750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.876773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.877371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.877702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.877713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.877722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.877731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.889643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.890102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.890146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.890169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.890753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.891208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.199 [2024-11-29 13:12:44.891217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.199 [2024-11-29 13:12:44.891223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.199 [2024-11-29 13:12:44.891231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.199 [2024-11-29 13:12:44.902613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.199 [2024-11-29 13:12:44.903028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.199 [2024-11-29 13:12:44.903045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.199 [2024-11-29 13:12:44.903053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.199 [2024-11-29 13:12:44.903239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.199 [2024-11-29 13:12:44.903414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.200 [2024-11-29 13:12:44.903422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.200 [2024-11-29 13:12:44.903428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.200 [2024-11-29 13:12:44.903434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.200 [2024-11-29 13:12:44.915598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.200 [2024-11-29 13:12:44.916066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.200 [2024-11-29 13:12:44.916083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.200 [2024-11-29 13:12:44.916091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.200 [2024-11-29 13:12:44.916265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.200 6718.00 IOPS, 26.24 MiB/s [2024-11-29T12:12:45.020Z] [2024-11-29 13:12:44.917692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.200 [2024-11-29 13:12:44.917699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.200 [2024-11-29 13:12:44.917705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.200 [2024-11-29 13:12:44.917712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.200 [2024-11-29 13:12:44.928539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.200 [2024-11-29 13:12:44.928998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.200 [2024-11-29 13:12:44.929016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.200 [2024-11-29 13:12:44.929027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.200 [2024-11-29 13:12:44.929200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.200 [2024-11-29 13:12:44.929373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.200 [2024-11-29 13:12:44.929381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.200 [2024-11-29 13:12:44.929387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.200 [2024-11-29 13:12:44.929394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.200 [2024-11-29 13:12:44.941540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.200 [2024-11-29 13:12:44.941972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.200 [2024-11-29 13:12:44.941989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.200 [2024-11-29 13:12:44.941997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.200 [2024-11-29 13:12:44.942170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.200 [2024-11-29 13:12:44.942346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.200 [2024-11-29 13:12:44.942354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.200 [2024-11-29 13:12:44.942360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.200 [2024-11-29 13:12:44.942366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.200 [2024-11-29 13:12:44.954581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.200 [2024-11-29 13:12:44.955014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.200 [2024-11-29 13:12:44.955032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.200 [2024-11-29 13:12:44.955039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.200 [2024-11-29 13:12:44.955217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.200 [2024-11-29 13:12:44.955395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.200 [2024-11-29 13:12:44.955403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.200 [2024-11-29 13:12:44.955410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.200 [2024-11-29 13:12:44.955416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.200 [2024-11-29 13:12:44.967788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.200 [2024-11-29 13:12:44.968246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.200 [2024-11-29 13:12:44.968290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.200 [2024-11-29 13:12:44.968315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.200 [2024-11-29 13:12:44.968901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.200 [2024-11-29 13:12:44.969411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.200 [2024-11-29 13:12:44.969421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.200 [2024-11-29 13:12:44.969427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.200 [2024-11-29 13:12:44.969433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.200 [2024-11-29 13:12:44.980685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.200 [2024-11-29 13:12:44.981101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.200 [2024-11-29 13:12:44.981146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.200 [2024-11-29 13:12:44.981169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.200 [2024-11-29 13:12:44.981752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.200 [2024-11-29 13:12:44.982300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.200 [2024-11-29 13:12:44.982309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.200 [2024-11-29 13:12:44.982315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.200 [2024-11-29 13:12:44.982322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.200 [2024-11-29 13:12:44.993620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.200 [2024-11-29 13:12:44.994059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.200 [2024-11-29 13:12:44.994104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.200 [2024-11-29 13:12:44.994127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.200 [2024-11-29 13:12:44.994711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.200 [2024-11-29 13:12:44.994907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.200 [2024-11-29 13:12:44.994915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.200 [2024-11-29 13:12:44.994922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.200 [2024-11-29 13:12:44.994928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.200 [2024-11-29 13:12:45.006561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.200 [2024-11-29 13:12:45.006967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.200 [2024-11-29 13:12:45.006983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.200 [2024-11-29 13:12:45.006990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.200 [2024-11-29 13:12:45.007154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.200 [2024-11-29 13:12:45.007316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.200 [2024-11-29 13:12:45.007324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.200 [2024-11-29 13:12:45.007334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.200 [2024-11-29 13:12:45.007340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.476 [2024-11-29 13:12:45.019544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.476 [2024-11-29 13:12:45.019968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.476 [2024-11-29 13:12:45.019985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.476 [2024-11-29 13:12:45.019992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.476 [2024-11-29 13:12:45.020174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.476 [2024-11-29 13:12:45.020341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.476 [2024-11-29 13:12:45.020349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.476 [2024-11-29 13:12:45.020355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.476 [2024-11-29 13:12:45.020361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.476 [2024-11-29 13:12:45.032455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.476 [2024-11-29 13:12:45.032857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.476 [2024-11-29 13:12:45.032873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.476 [2024-11-29 13:12:45.032880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.476 [2024-11-29 13:12:45.033070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.476 [2024-11-29 13:12:45.033244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.476 [2024-11-29 13:12:45.033252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.476 [2024-11-29 13:12:45.033259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.476 [2024-11-29 13:12:45.033265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.476 [2024-11-29 13:12:45.045303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.476 [2024-11-29 13:12:45.045729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.477 [2024-11-29 13:12:45.045745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.477 [2024-11-29 13:12:45.045752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.477 [2024-11-29 13:12:45.045925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.477 [2024-11-29 13:12:45.046104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.477 [2024-11-29 13:12:45.046113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.477 [2024-11-29 13:12:45.046120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.477 [2024-11-29 13:12:45.046126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.477 [2024-11-29 13:12:45.058165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.477 [2024-11-29 13:12:45.058605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.477 [2024-11-29 13:12:45.058649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.477 [2024-11-29 13:12:45.058672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.477 [2024-11-29 13:12:45.059187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.477 [2024-11-29 13:12:45.059443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.477 [2024-11-29 13:12:45.059454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.477 [2024-11-29 13:12:45.059464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.477 [2024-11-29 13:12:45.059473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.477 [2024-11-29 13:12:45.071892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.477 [2024-11-29 13:12:45.072294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.477 [2024-11-29 13:12:45.072340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.477 [2024-11-29 13:12:45.072363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.477 [2024-11-29 13:12:45.072946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.477 [2024-11-29 13:12:45.073548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.477 [2024-11-29 13:12:45.073580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.477 [2024-11-29 13:12:45.073587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.477 [2024-11-29 13:12:45.073594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.477 [2024-11-29 13:12:45.084747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.477 [2024-11-29 13:12:45.085178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.477 [2024-11-29 13:12:45.085195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.477 [2024-11-29 13:12:45.085203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.477 [2024-11-29 13:12:45.085375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.477 [2024-11-29 13:12:45.085548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.477 [2024-11-29 13:12:45.085555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.477 [2024-11-29 13:12:45.085561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.477 [2024-11-29 13:12:45.085568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.477 [2024-11-29 13:12:45.097601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.477 [2024-11-29 13:12:45.098045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.477 [2024-11-29 13:12:45.098091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.477 [2024-11-29 13:12:45.098122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.477 [2024-11-29 13:12:45.098614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.477 [2024-11-29 13:12:45.098788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.477 [2024-11-29 13:12:45.098796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.477 [2024-11-29 13:12:45.098803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.477 [2024-11-29 13:12:45.098809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.477 [2024-11-29 13:12:45.110436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.477 [2024-11-29 13:12:45.110863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.477 [2024-11-29 13:12:45.110879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.477 [2024-11-29 13:12:45.110886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.477 [2024-11-29 13:12:45.111084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.477 [2024-11-29 13:12:45.111272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.477 [2024-11-29 13:12:45.111280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.477 [2024-11-29 13:12:45.111286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.477 [2024-11-29 13:12:45.111292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.477 [2024-11-29 13:12:45.123306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.477 [2024-11-29 13:12:45.123728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.477 [2024-11-29 13:12:45.123745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.477 [2024-11-29 13:12:45.123752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.477 [2024-11-29 13:12:45.123925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.477 [2024-11-29 13:12:45.124105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.477 [2024-11-29 13:12:45.124114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.477 [2024-11-29 13:12:45.124120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.477 [2024-11-29 13:12:45.124127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.477 [2024-11-29 13:12:45.136255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.477 [2024-11-29 13:12:45.136700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.477 [2024-11-29 13:12:45.136744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.477 [2024-11-29 13:12:45.136767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.477 [2024-11-29 13:12:45.137364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.477 [2024-11-29 13:12:45.137810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.477 [2024-11-29 13:12:45.137819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.477 [2024-11-29 13:12:45.137825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.477 [2024-11-29 13:12:45.137831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.477 [2024-11-29 13:12:45.149098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.477 [2024-11-29 13:12:45.149505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.477 [2024-11-29 13:12:45.149521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.477 [2024-11-29 13:12:45.149528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.477 [2024-11-29 13:12:45.149691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.477 [2024-11-29 13:12:45.149854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.477 [2024-11-29 13:12:45.149862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.477 [2024-11-29 13:12:45.149868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.477 [2024-11-29 13:12:45.149874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.477 [2024-11-29 13:12:45.161910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.477 [2024-11-29 13:12:45.162313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.477 [2024-11-29 13:12:45.162330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.477 [2024-11-29 13:12:45.162337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.477 [2024-11-29 13:12:45.162499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.477 [2024-11-29 13:12:45.162662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.477 [2024-11-29 13:12:45.162670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.477 [2024-11-29 13:12:45.162676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.477 [2024-11-29 13:12:45.162682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.477 [2024-11-29 13:12:45.174735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.477 [2024-11-29 13:12:45.175180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.477 [2024-11-29 13:12:45.175197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.477 [2024-11-29 13:12:45.175204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.477 [2024-11-29 13:12:45.175376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.477 [2024-11-29 13:12:45.175548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.477 [2024-11-29 13:12:45.175556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.477 [2024-11-29 13:12:45.175566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.477 [2024-11-29 13:12:45.175573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.477 [2024-11-29 13:12:45.187657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.477 [2024-11-29 13:12:45.188056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.477 [2024-11-29 13:12:45.188072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.477 [2024-11-29 13:12:45.188079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.477 [2024-11-29 13:12:45.188242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.477 [2024-11-29 13:12:45.188406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.477 [2024-11-29 13:12:45.188414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.477 [2024-11-29 13:12:45.188420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.477 [2024-11-29 13:12:45.188426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.477 [2024-11-29 13:12:45.200525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.477 [2024-11-29 13:12:45.200945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.477 [2024-11-29 13:12:45.201002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.477 [2024-11-29 13:12:45.201026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.477 [2024-11-29 13:12:45.201610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.477 [2024-11-29 13:12:45.202019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.477 [2024-11-29 13:12:45.202031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.477 [2024-11-29 13:12:45.202040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.478 [2024-11-29 13:12:45.202049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.478 [2024-11-29 13:12:45.213942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.478 [2024-11-29 13:12:45.214362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.478 [2024-11-29 13:12:45.214405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.478 [2024-11-29 13:12:45.214428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.478 [2024-11-29 13:12:45.214974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.478 [2024-11-29 13:12:45.215169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.478 [2024-11-29 13:12:45.215178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.478 [2024-11-29 13:12:45.215185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.478 [2024-11-29 13:12:45.215191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.478 [2024-11-29 13:12:45.227068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.478 [2024-11-29 13:12:45.227452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.478 [2024-11-29 13:12:45.227470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.478 [2024-11-29 13:12:45.227478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.478 [2024-11-29 13:12:45.227656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.478 [2024-11-29 13:12:45.227835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.478 [2024-11-29 13:12:45.227845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.478 [2024-11-29 13:12:45.227853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.478 [2024-11-29 13:12:45.227860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.478 [2024-11-29 13:12:45.239964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.478 [2024-11-29 13:12:45.240377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.478 [2024-11-29 13:12:45.240393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.478 [2024-11-29 13:12:45.240401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.478 [2024-11-29 13:12:45.240573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.478 [2024-11-29 13:12:45.240745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.478 [2024-11-29 13:12:45.240753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.478 [2024-11-29 13:12:45.240759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.478 [2024-11-29 13:12:45.240765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.478 [2024-11-29 13:12:45.252827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.478 [2024-11-29 13:12:45.253252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.478 [2024-11-29 13:12:45.253268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.479 [2024-11-29 13:12:45.253276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.479 [2024-11-29 13:12:45.253448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.479 [2024-11-29 13:12:45.253622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.479 [2024-11-29 13:12:45.253630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.479 [2024-11-29 13:12:45.253636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.479 [2024-11-29 13:12:45.253643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.479 [2024-11-29 13:12:45.265664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.479 [2024-11-29 13:12:45.266093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.479 [2024-11-29 13:12:45.266138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.479 [2024-11-29 13:12:45.266169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.479 [2024-11-29 13:12:45.266753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.479 [2024-11-29 13:12:45.267140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.479 [2024-11-29 13:12:45.267149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.479 [2024-11-29 13:12:45.267155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.479 [2024-11-29 13:12:45.267161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.479 [2024-11-29 13:12:45.278661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.479 [2024-11-29 13:12:45.279064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.479 [2024-11-29 13:12:45.279082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.479 [2024-11-29 13:12:45.279089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.479 [2024-11-29 13:12:45.279263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.479 [2024-11-29 13:12:45.279436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.479 [2024-11-29 13:12:45.279444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.479 [2024-11-29 13:12:45.279450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.479 [2024-11-29 13:12:45.279456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.479 [2024-11-29 13:12:45.291705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.479 [2024-11-29 13:12:45.292138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.479 [2024-11-29 13:12:45.292155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.479 [2024-11-29 13:12:45.292163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.479 [2024-11-29 13:12:45.292353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.740 [2024-11-29 13:12:45.292531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.740 [2024-11-29 13:12:45.292540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.740 [2024-11-29 13:12:45.292547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.740 [2024-11-29 13:12:45.292553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.740 [2024-11-29 13:12:45.304639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.740 [2024-11-29 13:12:45.305072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.740 [2024-11-29 13:12:45.305089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.740 [2024-11-29 13:12:45.305096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.740 [2024-11-29 13:12:45.305270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.740 [2024-11-29 13:12:45.305445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.740 [2024-11-29 13:12:45.305454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.740 [2024-11-29 13:12:45.305460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.740 [2024-11-29 13:12:45.305467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.740 [2024-11-29 13:12:45.317532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.740 [2024-11-29 13:12:45.317909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.740 [2024-11-29 13:12:45.317925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.740 [2024-11-29 13:12:45.317932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.740 [2024-11-29 13:12:45.318143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.740 [2024-11-29 13:12:45.318322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.740 [2024-11-29 13:12:45.318331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.740 [2024-11-29 13:12:45.318337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.740 [2024-11-29 13:12:45.318343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.740 [2024-11-29 13:12:45.330456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.740 [2024-11-29 13:12:45.330864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.740 [2024-11-29 13:12:45.330881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.740 [2024-11-29 13:12:45.330888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.740 [2024-11-29 13:12:45.331079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.740 [2024-11-29 13:12:45.331252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.740 [2024-11-29 13:12:45.331260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.740 [2024-11-29 13:12:45.331266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.740 [2024-11-29 13:12:45.331272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.740 [2024-11-29 13:12:45.343382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.740 [2024-11-29 13:12:45.343715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.740 [2024-11-29 13:12:45.343732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.740 [2024-11-29 13:12:45.343739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.740 [2024-11-29 13:12:45.343902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.740 [2024-11-29 13:12:45.344092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.740 [2024-11-29 13:12:45.344101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.740 [2024-11-29 13:12:45.344111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.740 [2024-11-29 13:12:45.344117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.740 [2024-11-29 13:12:45.356202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.740 [2024-11-29 13:12:45.356602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.740 [2024-11-29 13:12:45.356619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.740 [2024-11-29 13:12:45.356626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.740 [2024-11-29 13:12:45.356789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.740 [2024-11-29 13:12:45.356958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.740 [2024-11-29 13:12:45.356967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.740 [2024-11-29 13:12:45.356989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.740 [2024-11-29 13:12:45.356996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.740 [2024-11-29 13:12:45.369040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.740 [2024-11-29 13:12:45.369369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.740 [2024-11-29 13:12:45.369385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.740 [2024-11-29 13:12:45.369392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.369555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.369719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.369727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.369732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.369738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.381953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.382365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.382381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.382388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.382551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.382715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.382722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.382728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.382734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.394834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.395189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.395204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.395212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.395385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.395557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.395566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.395572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.395578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.407773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.408175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.408192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.408199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.408371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.408543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.408552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.408558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.408564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.420622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.421050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.421067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.421074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.421247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.421420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.421428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.421435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.421441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.433531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.433974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.433990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.434001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.434175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.434348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.434356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.434362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.434369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.446514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.446946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.446969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.446976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.447149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.447322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.447330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.447336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.447343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.459393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.459800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.459817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.459824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.460005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.460179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.460187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.460193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.460199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.472520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.472937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.472960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.472968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.473147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.473328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.473336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.473343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.473349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.485535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.485989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.486034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.486057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.741 [2024-11-29 13:12:45.486513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.741 [2024-11-29 13:12:45.486692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.741 [2024-11-29 13:12:45.486700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.741 [2024-11-29 13:12:45.486707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.741 [2024-11-29 13:12:45.486714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.741 [2024-11-29 13:12:45.498625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:45.741 [2024-11-29 13:12:45.498979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.741 [2024-11-29 13:12:45.498997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:45.741 [2024-11-29 13:12:45.499004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:45.742 [2024-11-29 13:12:45.499182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:45.742 [2024-11-29 13:12:45.499364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:45.742 [2024-11-29 13:12:45.499372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:45.742 [2024-11-29 13:12:45.499379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:45.742 [2024-11-29 13:12:45.499385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:45.742 [2024-11-29 13:12:45.511456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.742 [2024-11-29 13:12:45.511887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.742 [2024-11-29 13:12:45.511904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.742 [2024-11-29 13:12:45.511911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.742 [2024-11-29 13:12:45.512090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.742 [2024-11-29 13:12:45.512264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.742 [2024-11-29 13:12:45.512272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.742 [2024-11-29 13:12:45.512283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.742 [2024-11-29 13:12:45.512289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.742 [2024-11-29 13:12:45.524308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.742 [2024-11-29 13:12:45.524674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.742 [2024-11-29 13:12:45.524718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.742 [2024-11-29 13:12:45.524741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.742 [2024-11-29 13:12:45.525341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.742 [2024-11-29 13:12:45.525596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.742 [2024-11-29 13:12:45.525604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.742 [2024-11-29 13:12:45.525610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.742 [2024-11-29 13:12:45.525616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.742 [2024-11-29 13:12:45.537256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.742 [2024-11-29 13:12:45.537684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.742 [2024-11-29 13:12:45.537700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.742 [2024-11-29 13:12:45.537708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.742 [2024-11-29 13:12:45.537881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.742 [2024-11-29 13:12:45.538067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.742 [2024-11-29 13:12:45.538077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.742 [2024-11-29 13:12:45.538083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.742 [2024-11-29 13:12:45.538090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.742 [2024-11-29 13:12:45.550112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.742 [2024-11-29 13:12:45.550544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.742 [2024-11-29 13:12:45.550561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:45.742 [2024-11-29 13:12:45.550568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:45.742 [2024-11-29 13:12:45.550741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:45.742 [2024-11-29 13:12:45.550918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.742 [2024-11-29 13:12:45.550926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.742 [2024-11-29 13:12:45.550933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.742 [2024-11-29 13:12:45.550939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.002 [2024-11-29 13:12:45.563134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.002 [2024-11-29 13:12:45.563574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-11-29 13:12:45.563591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.002 [2024-11-29 13:12:45.563598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.002 [2024-11-29 13:12:45.563779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.002 [2024-11-29 13:12:45.563965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.002 [2024-11-29 13:12:45.563974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.002 [2024-11-29 13:12:45.563981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.002 [2024-11-29 13:12:45.563987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.002 [2024-11-29 13:12:45.576157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.002 [2024-11-29 13:12:45.576496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-11-29 13:12:45.576512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.002 [2024-11-29 13:12:45.576520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.002 [2024-11-29 13:12:45.576693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.002 [2024-11-29 13:12:45.576868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.002 [2024-11-29 13:12:45.576876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.002 [2024-11-29 13:12:45.576883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.002 [2024-11-29 13:12:45.576889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.002 [2024-11-29 13:12:45.589399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.002 [2024-11-29 13:12:45.589840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-11-29 13:12:45.589885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.002 [2024-11-29 13:12:45.589908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.002 [2024-11-29 13:12:45.590506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.002 [2024-11-29 13:12:45.590995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.002 [2024-11-29 13:12:45.591003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.002 [2024-11-29 13:12:45.591010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.002 [2024-11-29 13:12:45.591017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.002 [2024-11-29 13:12:45.602392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.002 [2024-11-29 13:12:45.602818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-11-29 13:12:45.602834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.002 [2024-11-29 13:12:45.602845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.002 [2024-11-29 13:12:45.603024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.002 [2024-11-29 13:12:45.603198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.002 [2024-11-29 13:12:45.603206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.002 [2024-11-29 13:12:45.603212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.002 [2024-11-29 13:12:45.603218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.002 [2024-11-29 13:12:45.615321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.002 [2024-11-29 13:12:45.615690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-11-29 13:12:45.615707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.002 [2024-11-29 13:12:45.615714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.002 [2024-11-29 13:12:45.615887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.002 [2024-11-29 13:12:45.616068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.002 [2024-11-29 13:12:45.616077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.002 [2024-11-29 13:12:45.616083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.002 [2024-11-29 13:12:45.616089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.002 [2024-11-29 13:12:45.628228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.002 [2024-11-29 13:12:45.628569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-11-29 13:12:45.628585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.002 [2024-11-29 13:12:45.628592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.002 [2024-11-29 13:12:45.628765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.002 [2024-11-29 13:12:45.628939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.002 [2024-11-29 13:12:45.628955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.002 [2024-11-29 13:12:45.628962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.002 [2024-11-29 13:12:45.628969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.002 [2024-11-29 13:12:45.641204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.002 [2024-11-29 13:12:45.641501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-11-29 13:12:45.641518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.002 [2024-11-29 13:12:45.641525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.002 [2024-11-29 13:12:45.641697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.002 [2024-11-29 13:12:45.641874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.002 [2024-11-29 13:12:45.641882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.002 [2024-11-29 13:12:45.641889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.002 [2024-11-29 13:12:45.641895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.002 [2024-11-29 13:12:45.654266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.002 [2024-11-29 13:12:45.654567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.002 [2024-11-29 13:12:45.654583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.002 [2024-11-29 13:12:45.654591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.002 [2024-11-29 13:12:45.654763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.002 [2024-11-29 13:12:45.654937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.002 [2024-11-29 13:12:45.654945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.654959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.654966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.667329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.667684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.667701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.667708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.667881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.003 [2024-11-29 13:12:45.668071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.003 [2024-11-29 13:12:45.668080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.668087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.668093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.680232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.680616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.680633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.680640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.680813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.003 [2024-11-29 13:12:45.680993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.003 [2024-11-29 13:12:45.681002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.681013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.681020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.693249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.693543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.693559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.693566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.693739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.003 [2024-11-29 13:12:45.693913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.003 [2024-11-29 13:12:45.693921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.693928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.693934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.706179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.706472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.706507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.706532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.707071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.003 [2024-11-29 13:12:45.707245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.003 [2024-11-29 13:12:45.707254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.707260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.707267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.719200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.719525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.719541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.719548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.719711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.003 [2024-11-29 13:12:45.719875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.003 [2024-11-29 13:12:45.719883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.719889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.719895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.732174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.732547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.732563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.732570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.732743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.003 [2024-11-29 13:12:45.732917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.003 [2024-11-29 13:12:45.732925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.732932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.732938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.745304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.745713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.745757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.745781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.746245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.003 [2024-11-29 13:12:45.746424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.003 [2024-11-29 13:12:45.746433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.746440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.746447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.758348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.758762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.758779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.758786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.758966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.003 [2024-11-29 13:12:45.759140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.003 [2024-11-29 13:12:45.759148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.759155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.759161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.771337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.771686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.771703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.771713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.771886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.003 [2024-11-29 13:12:45.772067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.003 [2024-11-29 13:12:45.772077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.003 [2024-11-29 13:12:45.772083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.003 [2024-11-29 13:12:45.772090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.003 [2024-11-29 13:12:45.784433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.003 [2024-11-29 13:12:45.784869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.003 [2024-11-29 13:12:45.784912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.003 [2024-11-29 13:12:45.784935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.003 [2024-11-29 13:12:45.785400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.004 [2024-11-29 13:12:45.785575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.004 [2024-11-29 13:12:45.785584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.004 [2024-11-29 13:12:45.785591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.004 [2024-11-29 13:12:45.785597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.004 [2024-11-29 13:12:45.797403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.004 [2024-11-29 13:12:45.797787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.004 [2024-11-29 13:12:45.797830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.004 [2024-11-29 13:12:45.797853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.004 [2024-11-29 13:12:45.798375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.004 [2024-11-29 13:12:45.798550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.004 [2024-11-29 13:12:45.798559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.004 [2024-11-29 13:12:45.798565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.004 [2024-11-29 13:12:45.798572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.004 [2024-11-29 13:12:45.810516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.004 [2024-11-29 13:12:45.810882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.004 [2024-11-29 13:12:45.810899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.004 [2024-11-29 13:12:45.810906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.004 [2024-11-29 13:12:45.811102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.004 [2024-11-29 13:12:45.811284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.004 [2024-11-29 13:12:45.811293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.004 [2024-11-29 13:12:45.811300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.004 [2024-11-29 13:12:45.811306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.264 [2024-11-29 13:12:45.823651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.264 [2024-11-29 13:12:45.824041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.264 [2024-11-29 13:12:45.824059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.264 [2024-11-29 13:12:45.824067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.264 [2024-11-29 13:12:45.824255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.264 [2024-11-29 13:12:45.824430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.264 [2024-11-29 13:12:45.824439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.264 [2024-11-29 13:12:45.824446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.264 [2024-11-29 13:12:45.824452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.264 [2024-11-29 13:12:45.836653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.264 [2024-11-29 13:12:45.836954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.264 [2024-11-29 13:12:45.836971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.264 [2024-11-29 13:12:45.836979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.264 [2024-11-29 13:12:45.837152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.264 [2024-11-29 13:12:45.837325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.264 [2024-11-29 13:12:45.837334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.264 [2024-11-29 13:12:45.837340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.264 [2024-11-29 13:12:45.837346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.264 [2024-11-29 13:12:45.849796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.264 [2024-11-29 13:12:45.850209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.264 [2024-11-29 13:12:45.850256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.264 [2024-11-29 13:12:45.850280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.264 [2024-11-29 13:12:45.850864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.264 [2024-11-29 13:12:45.851467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.264 [2024-11-29 13:12:45.851476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.264 [2024-11-29 13:12:45.851488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.264 [2024-11-29 13:12:45.851495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.264 [2024-11-29 13:12:45.862768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.264 [2024-11-29 13:12:45.863138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.264 [2024-11-29 13:12:45.863155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.264 [2024-11-29 13:12:45.863163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.264 [2024-11-29 13:12:45.863336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.264 [2024-11-29 13:12:45.863509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.264 [2024-11-29 13:12:45.863517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.264 [2024-11-29 13:12:45.863524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.264 [2024-11-29 13:12:45.863530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.264 [2024-11-29 13:12:45.875740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.264 [2024-11-29 13:12:45.876199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.264 [2024-11-29 13:12:45.876216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.264 [2024-11-29 13:12:45.876223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.264 [2024-11-29 13:12:45.876396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.264 [2024-11-29 13:12:45.876569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.264 [2024-11-29 13:12:45.876577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.264 [2024-11-29 13:12:45.876584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.264 [2024-11-29 13:12:45.876590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.264 [2024-11-29 13:12:45.888820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.264 [2024-11-29 13:12:45.889102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.264 [2024-11-29 13:12:45.889119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.264 [2024-11-29 13:12:45.889126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.264 [2024-11-29 13:12:45.889299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.264 [2024-11-29 13:12:45.889472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.264 [2024-11-29 13:12:45.889481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.264 [2024-11-29 13:12:45.889487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.264 [2024-11-29 13:12:45.889494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.264 [2024-11-29 13:12:45.901759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.264 [2024-11-29 13:12:45.902160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.264 [2024-11-29 13:12:45.902177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.264 [2024-11-29 13:12:45.902184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.264 [2024-11-29 13:12:45.902357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.264 [2024-11-29 13:12:45.902531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.264 [2024-11-29 13:12:45.902539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.264 [2024-11-29 13:12:45.902546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.264 [2024-11-29 13:12:45.902552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.264 [2024-11-29 13:12:45.914851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.264 [2024-11-29 13:12:45.915247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.264 [2024-11-29 13:12:45.915264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.264 [2024-11-29 13:12:45.915271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.264 [2024-11-29 13:12:45.915444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.264 [2024-11-29 13:12:45.915616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.264 [2024-11-29 13:12:45.915624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.264 [2024-11-29 13:12:45.915630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.264 [2024-11-29 13:12:45.915637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.264 5374.40 IOPS, 20.99 MiB/s [2024-11-29T12:12:46.084Z] [2024-11-29 13:12:45.927707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.264 [2024-11-29 13:12:45.928074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:45.928091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:45.928098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:45.928272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:45.928446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:45.928454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:45.928461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:45.928467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:45.940712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:45.941037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:45.941054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:45.941066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:45.941253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:45.941427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:45.941436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:45.941443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:45.941450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:45.953745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:45.954163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:45.954180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:45.954188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:45.954361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:45.954535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:45.954544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:45.954551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:45.954558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:45.966958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:45.967306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:45.967323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:45.967330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:45.967508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:45.967687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:45.967695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:45.967702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:45.967708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:45.980187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:45.980643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:45.980659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:45.980667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:45.980845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:45.981035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:45.981045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:45.981052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:45.981059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:45.993075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:45.993442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:45.993458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:45.993466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:45.993639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:45.993817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:45.993825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:45.993832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:45.993838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:46.006286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:46.006600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:46.006616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:46.006624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:46.006802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:46.006986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:46.006995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:46.007002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:46.007008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:46.019219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:46.019682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:46.019726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:46.019750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:46.020350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:46.020897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:46.020905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:46.020915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:46.020921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:46.032048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:46.032471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:46.032487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:46.032494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:46.032658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:46.032820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:46.032828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:46.032834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:46.032840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:46.045004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:46.045347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.265 [2024-11-29 13:12:46.045363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.265 [2024-11-29 13:12:46.045370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.265 [2024-11-29 13:12:46.045533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.265 [2024-11-29 13:12:46.045696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.265 [2024-11-29 13:12:46.045703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.265 [2024-11-29 13:12:46.045709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.265 [2024-11-29 13:12:46.045715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.265 [2024-11-29 13:12:46.057824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.265 [2024-11-29 13:12:46.058272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.266 [2024-11-29 13:12:46.058311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.266 [2024-11-29 13:12:46.058336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.266 [2024-11-29 13:12:46.058919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.266 [2024-11-29 13:12:46.059149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.266 [2024-11-29 13:12:46.059157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.266 [2024-11-29 13:12:46.059163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.266 [2024-11-29 13:12:46.059170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.266 [2024-11-29 13:12:46.070651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.266 [2024-11-29 13:12:46.071056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.266 [2024-11-29 13:12:46.071073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.266 [2024-11-29 13:12:46.071080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.266 [2024-11-29 13:12:46.071244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.266 [2024-11-29 13:12:46.071408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.266 [2024-11-29 13:12:46.071416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.266 [2024-11-29 13:12:46.071422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.266 [2024-11-29 13:12:46.071428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.526 [2024-11-29 13:12:46.083810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.526 [2024-11-29 13:12:46.084261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.526 [2024-11-29 13:12:46.084279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.526 [2024-11-29 13:12:46.084286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.526 [2024-11-29 13:12:46.084459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.526 [2024-11-29 13:12:46.084634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.526 [2024-11-29 13:12:46.084642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.526 [2024-11-29 13:12:46.084648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.526 [2024-11-29 13:12:46.084655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.526 [2024-11-29 13:12:46.096722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.526 [2024-11-29 13:12:46.097056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.526 [2024-11-29 13:12:46.097072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.526 [2024-11-29 13:12:46.097079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.526 [2024-11-29 13:12:46.097243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.526 [2024-11-29 13:12:46.097406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.526 [2024-11-29 13:12:46.097414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.526 [2024-11-29 13:12:46.097420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.526 [2024-11-29 13:12:46.097426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.526 [2024-11-29 13:12:46.109529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.526 [2024-11-29 13:12:46.109959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.526 [2024-11-29 13:12:46.109996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.526 [2024-11-29 13:12:46.110029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.526 [2024-11-29 13:12:46.110613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.526 [2024-11-29 13:12:46.111195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.526 [2024-11-29 13:12:46.111204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.526 [2024-11-29 13:12:46.111210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.526 [2024-11-29 13:12:46.111217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.526 [2024-11-29 13:12:46.122339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.526 [2024-11-29 13:12:46.122795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.526 [2024-11-29 13:12:46.122812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.526 [2024-11-29 13:12:46.122819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.526 [2024-11-29 13:12:46.122998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.526 [2024-11-29 13:12:46.123172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.526 [2024-11-29 13:12:46.123180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.526 [2024-11-29 13:12:46.123187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.526 [2024-11-29 13:12:46.123193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.526 [2024-11-29 13:12:46.135265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.526 [2024-11-29 13:12:46.135705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.526 [2024-11-29 13:12:46.135749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.526 [2024-11-29 13:12:46.135773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.526 [2024-11-29 13:12:46.136193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.526 [2024-11-29 13:12:46.136369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.526 [2024-11-29 13:12:46.136378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.526 [2024-11-29 13:12:46.136385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.526 [2024-11-29 13:12:46.136391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.526 [2024-11-29 13:12:46.148151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.526 [2024-11-29 13:12:46.148578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.526 [2024-11-29 13:12:46.148594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.526 [2024-11-29 13:12:46.148601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.526 [2024-11-29 13:12:46.149177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.526 [2024-11-29 13:12:46.149355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.526 [2024-11-29 13:12:46.149363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.526 [2024-11-29 13:12:46.149369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.526 [2024-11-29 13:12:46.149375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.526 [2024-11-29 13:12:46.161007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.526 [2024-11-29 13:12:46.161455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.526 [2024-11-29 13:12:46.161506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.526 [2024-11-29 13:12:46.161529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.526 [2024-11-29 13:12:46.162085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.526 [2024-11-29 13:12:46.162265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.526 [2024-11-29 13:12:46.162273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.526 [2024-11-29 13:12:46.162280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.526 [2024-11-29 13:12:46.162287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.526 [2024-11-29 13:12:46.173896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.526 [2024-11-29 13:12:46.174333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.526 [2024-11-29 13:12:46.174350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.526 [2024-11-29 13:12:46.174357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.526 [2024-11-29 13:12:46.174530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.526 [2024-11-29 13:12:46.174707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.526 [2024-11-29 13:12:46.174716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.526 [2024-11-29 13:12:46.174722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.526 [2024-11-29 13:12:46.174728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.526 [2024-11-29 13:12:46.186826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.526 [2024-11-29 13:12:46.187275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.526 [2024-11-29 13:12:46.187313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.526 [2024-11-29 13:12:46.187338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.526 [2024-11-29 13:12:46.187921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.526 [2024-11-29 13:12:46.188196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.526 [2024-11-29 13:12:46.188205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.526 [2024-11-29 13:12:46.188215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.188221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.199649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.200089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.200134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.200157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.200740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.201319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.201327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.527 [2024-11-29 13:12:46.201333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.201339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.212597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.212980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.212998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.213005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.213185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.213349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.213357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.527 [2024-11-29 13:12:46.213363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.213369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.225478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.225922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.225979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.226003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.226489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.226662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.226670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.527 [2024-11-29 13:12:46.226677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.226683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.238470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.238916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.238970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.238995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.239473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.239647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.239655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.527 [2024-11-29 13:12:46.239662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.239668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.251340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.251806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.251823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.251831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.252014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.252193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.252201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.527 [2024-11-29 13:12:46.252208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.252214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.264550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.265014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.265032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.265039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.265217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.265397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.265405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.527 [2024-11-29 13:12:46.265413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.265420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.277576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.278044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.278061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.278072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.278245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.278419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.278427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.527 [2024-11-29 13:12:46.278433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.278440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.290476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.290876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.290892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.290899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.291090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.291264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.291272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.527 [2024-11-29 13:12:46.291278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.291285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.303379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.303806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.303822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.303828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.304015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.304189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.304197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.527 [2024-11-29 13:12:46.304204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.527 [2024-11-29 13:12:46.304210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.527 [2024-11-29 13:12:46.316240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.527 [2024-11-29 13:12:46.316673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.527 [2024-11-29 13:12:46.316702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.527 [2024-11-29 13:12:46.316726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.527 [2024-11-29 13:12:46.317299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.527 [2024-11-29 13:12:46.317475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.527 [2024-11-29 13:12:46.317484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.528 [2024-11-29 13:12:46.317490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.528 [2024-11-29 13:12:46.317496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.528 [2024-11-29 13:12:46.329171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.528 [2024-11-29 13:12:46.329602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.528 [2024-11-29 13:12:46.329618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.528 [2024-11-29 13:12:46.329626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.528 [2024-11-29 13:12:46.329789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.528 [2024-11-29 13:12:46.329957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.528 [2024-11-29 13:12:46.329966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.528 [2024-11-29 13:12:46.329972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.528 [2024-11-29 13:12:46.329995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.528 [2024-11-29 13:12:46.342243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.528 [2024-11-29 13:12:46.342713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.528 [2024-11-29 13:12:46.342760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.528 [2024-11-29 13:12:46.342784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.528 [2024-11-29 13:12:46.343404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.528 [2024-11-29 13:12:46.343842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.528 [2024-11-29 13:12:46.343851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.528 [2024-11-29 13:12:46.343858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.528 [2024-11-29 13:12:46.343865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.788 [2024-11-29 13:12:46.355102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.788 [2024-11-29 13:12:46.355556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.788 [2024-11-29 13:12:46.355572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.788 [2024-11-29 13:12:46.355580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.788 [2024-11-29 13:12:46.355753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.788 [2024-11-29 13:12:46.355925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.788 [2024-11-29 13:12:46.355933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.788 [2024-11-29 13:12:46.355944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.788 [2024-11-29 13:12:46.355956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.788 [2024-11-29 13:12:46.367938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.788 [2024-11-29 13:12:46.368387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.788 [2024-11-29 13:12:46.368431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.788 [2024-11-29 13:12:46.368455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.788 [2024-11-29 13:12:46.369054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.789 [2024-11-29 13:12:46.369642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.789 [2024-11-29 13:12:46.369671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.789 [2024-11-29 13:12:46.369677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.789 [2024-11-29 13:12:46.369684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.789 [2024-11-29 13:12:46.380815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.789 [2024-11-29 13:12:46.381250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.789 [2024-11-29 13:12:46.381296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.789 [2024-11-29 13:12:46.381319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.789 [2024-11-29 13:12:46.381799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.789 [2024-11-29 13:12:46.381978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.789 [2024-11-29 13:12:46.381987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.789 [2024-11-29 13:12:46.381994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.789 [2024-11-29 13:12:46.382017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.789 [2024-11-29 13:12:46.393649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:46.789 [2024-11-29 13:12:46.394079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.789 [2024-11-29 13:12:46.394096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420
00:28:46.789 [2024-11-29 13:12:46.394103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set
00:28:46.789 [2024-11-29 13:12:46.394266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor
00:28:46.789 [2024-11-29 13:12:46.394429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:46.789 [2024-11-29 13:12:46.394437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:46.789 [2024-11-29 13:12:46.394443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:46.789 [2024-11-29 13:12:46.394449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:46.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2146186 Killed "${NVMF_APP[@]}" "$@" 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.789 [2024-11-29 13:12:46.406850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2147577 00:28:46.789 [2024-11-29 13:12:46.407273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.789 [2024-11-29 13:12:46.407291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.789 [2024-11-29 13:12:46.407299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2147577 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:46.789 [2024-11-29 13:12:46.407477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.789 [2024-11-29 13:12:46.407656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.789
[2024-11-29 13:12:46.407664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.789 [2024-11-29 13:12:46.407671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.789 [2024-11-29 13:12:46.407677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2147577 ']' 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
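The `waitforlisten 2147577` step traced above polls, with a bounded retry budget (`max_retries=100`), until the freshly restarted `nvmf_tgt` has created its UNIX-domain RPC socket at `/var/tmp/spdk.sock`. A toy sketch of that pattern, with a thread standing in for the target process and a temporary path standing in for the real socket (all names here are illustrative, not SPDK's implementation):

```python
import os
import socket
import tempfile
import threading
import time

# Hypothetical path standing in for /var/tmp/spdk.sock.
sock_path = os.path.join(tempfile.mkdtemp(), "rpc.sock")

def fake_target():
    """Stand-in for nvmf_tgt: create the RPC listen socket after a startup delay."""
    time.sleep(0.3)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
    time.sleep(2)  # keep the socket around while the poller checks for it

threading.Thread(target=fake_target, daemon=True).start()

# waitforlisten-style poll: retry until the socket appears or the budget runs out.
found = False
for _ in range(100):
    if os.path.exists(sock_path):
        found = True
        break
    time.sleep(0.05)

assert found
```

Only once this wait succeeds does the harness return 0 and move on to issuing RPCs (`nvmf_create_transport`, `bdev_malloc_create`) against the new target, which is the sequence visible later in the log.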
00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.789 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:46.789 [2024-11-29 13:12:46.419962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.789 [2024-11-29 13:12:46.420377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.789 [2024-11-29 13:12:46.420394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.789 [2024-11-29 13:12:46.420401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.789 [2024-11-29 13:12:46.420580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.789 [2024-11-29 13:12:46.420759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.789 [2024-11-29 13:12:46.420767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.789 [2024-11-29 13:12:46.420773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.789 [2024-11-29 13:12:46.420780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.789 [2024-11-29 13:12:46.433063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.789 [2024-11-29 13:12:46.433506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.789 [2024-11-29 13:12:46.433523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.789 [2024-11-29 13:12:46.433530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.789 [2024-11-29 13:12:46.433709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.789 [2024-11-29 13:12:46.433888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.789 [2024-11-29 13:12:46.433896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.789 [2024-11-29 13:12:46.433903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.789 [2024-11-29 13:12:46.433910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.789 [2024-11-29 13:12:46.446145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.789 [2024-11-29 13:12:46.446530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.789 [2024-11-29 13:12:46.446547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.789 [2024-11-29 13:12:46.446555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.789 [2024-11-29 13:12:46.446732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.789 [2024-11-29 13:12:46.446911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.789 [2024-11-29 13:12:46.446919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.789 [2024-11-29 13:12:46.446926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.789 [2024-11-29 13:12:46.446933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.789 [2024-11-29 13:12:46.455575] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:28:46.789 [2024-11-29 13:12:46.455614] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.789 [2024-11-29 13:12:46.459274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.789 [2024-11-29 13:12:46.459713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.789 [2024-11-29 13:12:46.459729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.789 [2024-11-29 13:12:46.459737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.789 [2024-11-29 13:12:46.459916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.789 [2024-11-29 13:12:46.460101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.789 [2024-11-29 13:12:46.460111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.789 [2024-11-29 13:12:46.460117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.789 [2024-11-29 13:12:46.460134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.789 [2024-11-29 13:12:46.472433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.789 [2024-11-29 13:12:46.472892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.789 [2024-11-29 13:12:46.472909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.789 [2024-11-29 13:12:46.472917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.789 [2024-11-29 13:12:46.473101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.473280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.790 [2024-11-29 13:12:46.473289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.790 [2024-11-29 13:12:46.473296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.790 [2024-11-29 13:12:46.473302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.790 [2024-11-29 13:12:46.485527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.790 [2024-11-29 13:12:46.485966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.790 [2024-11-29 13:12:46.485983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.790 [2024-11-29 13:12:46.485991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.790 [2024-11-29 13:12:46.486170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.486348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.790 [2024-11-29 13:12:46.486357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.790 [2024-11-29 13:12:46.486363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.790 [2024-11-29 13:12:46.486370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.790 [2024-11-29 13:12:46.498594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.790 [2024-11-29 13:12:46.499059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.790 [2024-11-29 13:12:46.499076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.790 [2024-11-29 13:12:46.499084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.790 [2024-11-29 13:12:46.499268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.499441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.790 [2024-11-29 13:12:46.499450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.790 [2024-11-29 13:12:46.499456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.790 [2024-11-29 13:12:46.499463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.790 [2024-11-29 13:12:46.511660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.790 [2024-11-29 13:12:46.512024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.790 [2024-11-29 13:12:46.512047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.790 [2024-11-29 13:12:46.512054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.790 [2024-11-29 13:12:46.512233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.512412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.790 [2024-11-29 13:12:46.512421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.790 [2024-11-29 13:12:46.512429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.790 [2024-11-29 13:12:46.512436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.790 [2024-11-29 13:12:46.522040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:46.790 [2024-11-29 13:12:46.524859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.790 [2024-11-29 13:12:46.525289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.790 [2024-11-29 13:12:46.525307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.790 [2024-11-29 13:12:46.525315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.790 [2024-11-29 13:12:46.525495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.525672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.790 [2024-11-29 13:12:46.525681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.790 [2024-11-29 13:12:46.525688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.790 [2024-11-29 13:12:46.525694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.790 [2024-11-29 13:12:46.537990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.790 [2024-11-29 13:12:46.538350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.790 [2024-11-29 13:12:46.538367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.790 [2024-11-29 13:12:46.538375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.790 [2024-11-29 13:12:46.538553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.538732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.790 [2024-11-29 13:12:46.538741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.790 [2024-11-29 13:12:46.538748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.790 [2024-11-29 13:12:46.538754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.790 [2024-11-29 13:12:46.551159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.790 [2024-11-29 13:12:46.551610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.790 [2024-11-29 13:12:46.551626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.790 [2024-11-29 13:12:46.551638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.790 [2024-11-29 13:12:46.551817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.552003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.790 [2024-11-29 13:12:46.552012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.790 [2024-11-29 13:12:46.552019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.790 [2024-11-29 13:12:46.552025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.790 [2024-11-29 13:12:46.564279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.790 [2024-11-29 13:12:46.564738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.790 [2024-11-29 13:12:46.564755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.790 [2024-11-29 13:12:46.564762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.790 [2024-11-29 13:12:46.564941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.565116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.790 [2024-11-29 13:12:46.565125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.790 [2024-11-29 13:12:46.565137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.790 [2024-11-29 13:12:46.565140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.790 [2024-11-29 13:12:46.565144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.790 [2024-11-29 13:12:46.565148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.790 [2024-11-29 13:12:46.565152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:46.790 [2024-11-29 13:12:46.565156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:46.790 [2024-11-29 13:12:46.565162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.790 [2024-11-29 13:12:46.566537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.790 [2024-11-29 13:12:46.566562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.790 [2024-11-29 13:12:46.566564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.790 [2024-11-29 13:12:46.577438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.790 [2024-11-29 13:12:46.577832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.790 [2024-11-29 13:12:46.577851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.790 [2024-11-29 13:12:46.577860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.790 [2024-11-29 13:12:46.578044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.578224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.790 [2024-11-29 13:12:46.578233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.790 [2024-11-29 13:12:46.578240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.790 [2024-11-29 13:12:46.578253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.790 [2024-11-29 13:12:46.590548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.790 [2024-11-29 13:12:46.591021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.790 [2024-11-29 13:12:46.591042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.790 [2024-11-29 13:12:46.591051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.790 [2024-11-29 13:12:46.591233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.790 [2024-11-29 13:12:46.591412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.791 [2024-11-29 13:12:46.591421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.791 [2024-11-29 13:12:46.591428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.791 [2024-11-29 13:12:46.591435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.791 [2024-11-29 13:12:46.603732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.791 [2024-11-29 13:12:46.604209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.791 [2024-11-29 13:12:46.604229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:46.791 [2024-11-29 13:12:46.604238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:46.791 [2024-11-29 13:12:46.604419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:46.791 [2024-11-29 13:12:46.604600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.791 [2024-11-29 13:12:46.604610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.791 [2024-11-29 13:12:46.604617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.791 [2024-11-29 13:12:46.604624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.051 [2024-11-29 13:12:46.616942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.051 [2024-11-29 13:12:46.617412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.051 [2024-11-29 13:12:46.617432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.051 [2024-11-29 13:12:46.617441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.051 [2024-11-29 13:12:46.617621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.051 [2024-11-29 13:12:46.617801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.051 [2024-11-29 13:12:46.617809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.051 [2024-11-29 13:12:46.617816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.051 [2024-11-29 13:12:46.617824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.051 [2024-11-29 13:12:46.630097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.051 [2024-11-29 13:12:46.630529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.051 [2024-11-29 13:12:46.630549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.051 [2024-11-29 13:12:46.630558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.051 [2024-11-29 13:12:46.630738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.051 [2024-11-29 13:12:46.630918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.051 [2024-11-29 13:12:46.630926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.051 [2024-11-29 13:12:46.630933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.051 [2024-11-29 13:12:46.630940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.051 [2024-11-29 13:12:46.643231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.051 [2024-11-29 13:12:46.643649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.051 [2024-11-29 13:12:46.643667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.051 [2024-11-29 13:12:46.643675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.051 [2024-11-29 13:12:46.643855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.051 [2024-11-29 13:12:46.644040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.051 [2024-11-29 13:12:46.644050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.051 [2024-11-29 13:12:46.644057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.051 [2024-11-29 13:12:46.644063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.051 [2024-11-29 13:12:46.656340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.051 [2024-11-29 13:12:46.656691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.051 [2024-11-29 13:12:46.656708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.051 [2024-11-29 13:12:46.656716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.051 [2024-11-29 13:12:46.656896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.051 [2024-11-29 13:12:46.657078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.051 [2024-11-29 13:12:46.657087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.051 [2024-11-29 13:12:46.657094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.051 [2024-11-29 13:12:46.657101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.051 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.051 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:47.051 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:47.051 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:47.051 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.051 [2024-11-29 13:12:46.669535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.051 [2024-11-29 13:12:46.669979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.052 [2024-11-29 13:12:46.669996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.052 [2024-11-29 13:12:46.670004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.052 [2024-11-29 13:12:46.670183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.052 [2024-11-29 13:12:46.670362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.052 [2024-11-29 13:12:46.670371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.052 [2024-11-29 13:12:46.670379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.052 [2024-11-29 13:12:46.670385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.052 [2024-11-29 13:12:46.682670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.052 [2024-11-29 13:12:46.683137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.052 [2024-11-29 13:12:46.683154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.052 [2024-11-29 13:12:46.683162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.052 [2024-11-29 13:12:46.683341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.052 [2024-11-29 13:12:46.683521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.052 [2024-11-29 13:12:46.683529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.052 [2024-11-29 13:12:46.683535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.052 [2024-11-29 13:12:46.683542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.052 [2024-11-29 13:12:46.695813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.052 [2024-11-29 13:12:46.696163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.052 [2024-11-29 13:12:46.696180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.052 [2024-11-29 13:12:46.696187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.052 [2024-11-29 13:12:46.696366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.052 [2024-11-29 13:12:46.696545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.052 [2024-11-29 13:12:46.696553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.052 [2024-11-29 13:12:46.696560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.052 [2024-11-29 13:12:46.696566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.052 [2024-11-29 13:12:46.703555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.052 [2024-11-29 13:12:46.708886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:47.052 [2024-11-29 13:12:46.709245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.052 [2024-11-29 13:12:46.709263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.052 [2024-11-29 13:12:46.709270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.052 [2024-11-29 13:12:46.709449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.052 [2024-11-29 13:12:46.709628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.052 [2024-11-29 13:12:46.709637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:28:47.052 [2024-11-29 13:12:46.709644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.052 [2024-11-29 13:12:46.709650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.052 [2024-11-29 13:12:46.722072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.052 [2024-11-29 13:12:46.722501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.052 [2024-11-29 13:12:46.722518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.052 [2024-11-29 13:12:46.722526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.052 [2024-11-29 13:12:46.722706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.052 [2024-11-29 13:12:46.722886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.052 [2024-11-29 13:12:46.722894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.052 [2024-11-29 13:12:46.722901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.052 [2024-11-29 13:12:46.722908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.052 [2024-11-29 13:12:46.735168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.052 [2024-11-29 13:12:46.735567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.052 [2024-11-29 13:12:46.735584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.052 [2024-11-29 13:12:46.735592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.052 [2024-11-29 13:12:46.735770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.052 [2024-11-29 13:12:46.735955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.052 [2024-11-29 13:12:46.735963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.052 [2024-11-29 13:12:46.735974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.052 [2024-11-29 13:12:46.735981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.052 [2024-11-29 13:12:46.748267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.052 [2024-11-29 13:12:46.748688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.052 [2024-11-29 13:12:46.748706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.052 [2024-11-29 13:12:46.748714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.052 [2024-11-29 13:12:46.748894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.052 [2024-11-29 13:12:46.749076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.052 [2024-11-29 13:12:46.749085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.052 [2024-11-29 13:12:46.749092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.052 [2024-11-29 13:12:46.749098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.052 Malloc0 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.052 [2024-11-29 13:12:46.761359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.052 [2024-11-29 13:12:46.761780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.052 [2024-11-29 13:12:46.761798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.052 [2024-11-29 13:12:46.761805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.052 [2024-11-29 13:12:46.761988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.052 [2024-11-29 13:12:46.762168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.052 [2024-11-29 13:12:46.762177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.052 [2024-11-29 13:12:46.762183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.052 [2024-11-29 13:12:46.762190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.052 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.052 [2024-11-29 13:12:46.774459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.052 [2024-11-29 13:12:46.774877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.052 [2024-11-29 13:12:46.774894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceb510 with addr=10.0.0.2, port=4420 00:28:47.053 [2024-11-29 13:12:46.774903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb510 is same with the state(6) to be set 00:28:47.053 [2024-11-29 13:12:46.774938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.053 [2024-11-29 13:12:46.775087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb510 (9): Bad file descriptor 00:28:47.053 [2024-11-29 13:12:46.775267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
00:28:47.053 [2024-11-29 13:12:46.775275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.053 [2024-11-29 13:12:46.775282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.053 [2024-11-29 13:12:46.775288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:47.053 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.053 13:12:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2146641 00:28:47.053 [2024-11-29 13:12:46.787557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.053 [2024-11-29 13:12:46.811823] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:28:48.248 4661.67 IOPS, 18.21 MiB/s [2024-11-29T12:12:49.004Z] 5558.14 IOPS, 21.71 MiB/s [2024-11-29T12:12:49.942Z] 6231.62 IOPS, 24.34 MiB/s [2024-11-29T12:12:51.320Z] 6722.11 IOPS, 26.26 MiB/s [2024-11-29T12:12:52.257Z] 7133.00 IOPS, 27.86 MiB/s [2024-11-29T12:12:53.194Z] 7472.18 IOPS, 29.19 MiB/s [2024-11-29T12:12:54.129Z] 7750.42 IOPS, 30.28 MiB/s [2024-11-29T12:12:55.066Z] 7977.31 IOPS, 31.16 MiB/s [2024-11-29T12:12:56.000Z] 8173.93 IOPS, 31.93 MiB/s 00:28:56.180 Latency(us) 00:28:56.180 [2024-11-29T12:12:56.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.181 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:56.181 Verification LBA range: start 0x0 length 0x4000 00:28:56.181 Nvme1n1 : 15.00 8346.75 32.60 10807.46 0.00 6662.65 658.92 16412.49 00:28:56.181 [2024-11-29T12:12:56.001Z] =================================================================================================================== 00:28:56.181 
[2024-11-29T12:12:56.001Z] Total : 8346.75 32.60 10807.46 0.00 6662.65 658.92 16412.49 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.440 rmmod nvme_tcp 00:28:56.440 rmmod nvme_fabrics 00:28:56.440 rmmod nvme_keyring 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2147577 ']' 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2147577 
00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2147577 ']' 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2147577 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147577 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147577' 00:28:56.440 killing process with pid 2147577 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2147577 00:28:56.440 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2147577 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.700 13:12:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.235 13:12:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.235 00:28:59.235 real 0m25.595s 00:28:59.235 user 1m1.046s 00:28:59.235 sys 0m6.425s 00:28:59.235 13:12:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.235 13:12:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.235 ************************************ 00:28:59.235 END TEST nvmf_bdevperf 00:28:59.235 ************************************ 00:28:59.235 13:12:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:59.235 13:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.235 13:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.235 13:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.235 ************************************ 00:28:59.235 START TEST nvmf_target_disconnect 00:28:59.235 ************************************ 00:28:59.235 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:59.235 * Looking for test storage... 
00:28:59.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.235 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:59.236 13:12:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.236 
--rc genhtml_branch_coverage=1 00:28:59.236 --rc genhtml_function_coverage=1 00:28:59.236 --rc genhtml_legend=1 00:28:59.236 --rc geninfo_all_blocks=1 00:28:59.236 --rc geninfo_unexecuted_blocks=1 00:28:59.236 00:28:59.236 ' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.236 --rc genhtml_branch_coverage=1 00:28:59.236 --rc genhtml_function_coverage=1 00:28:59.236 --rc genhtml_legend=1 00:28:59.236 --rc geninfo_all_blocks=1 00:28:59.236 --rc geninfo_unexecuted_blocks=1 00:28:59.236 00:28:59.236 ' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.236 --rc genhtml_branch_coverage=1 00:28:59.236 --rc genhtml_function_coverage=1 00:28:59.236 --rc genhtml_legend=1 00:28:59.236 --rc geninfo_all_blocks=1 00:28:59.236 --rc geninfo_unexecuted_blocks=1 00:28:59.236 00:28:59.236 ' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.236 --rc genhtml_branch_coverage=1 00:28:59.236 --rc genhtml_function_coverage=1 00:28:59.236 --rc genhtml_legend=1 00:28:59.236 --rc geninfo_all_blocks=1 00:28:59.236 --rc geninfo_unexecuted_blocks=1 00:28:59.236 00:28:59.236 ' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.236 13:12:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:59.236 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.237 13:12:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.503 
13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:04.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:04.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.503 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:04.504 Found net devices under 0000:86:00.0: cvl_0_0 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:04.504 Found net devices under 0000:86:00.1: cvl_0_1 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.504 13:13:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:29:04.504 00:29:04.504 --- 10.0.0.2 ping statistics --- 00:29:04.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.504 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:29:04.504 00:29:04.504 --- 10.0.0.1 ping statistics --- 00:29:04.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.504 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.504 13:13:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.504 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.763 ************************************ 00:29:04.763 START TEST nvmf_target_disconnect_tc1 00:29:04.763 ************************************ 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.763 [2024-11-29 13:13:04.433090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.763 [2024-11-29 13:13:04.433138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa38ac0 with 
addr=10.0.0.2, port=4420 00:29:04.763 [2024-11-29 13:13:04.433162] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:04.763 [2024-11-29 13:13:04.433176] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:04.763 [2024-11-29 13:13:04.433183] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:04.763 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:04.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:04.763 Initializing NVMe Controllers 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.763 00:29:04.763 real 0m0.109s 00:29:04.763 user 0m0.049s 00:29:04.763 sys 0m0.059s 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.763 ************************************ 00:29:04.763 END TEST nvmf_target_disconnect_tc1 00:29:04.763 ************************************ 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:04.763 13:13:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.763 ************************************ 00:29:04.763 START TEST nvmf_target_disconnect_tc2 00:29:04.763 ************************************ 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2152741 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2152741 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2152741 ']' 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.763 13:13:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.763 [2024-11-29 13:13:04.578636] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:29:04.763 [2024-11-29 13:13:04.578679] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.022 [2024-11-29 13:13:04.659988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:05.022 [2024-11-29 13:13:04.700214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.022 [2024-11-29 13:13:04.700251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.022 [2024-11-29 13:13:04.700258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.022 [2024-11-29 13:13:04.700264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.022 [2024-11-29 13:13:04.700268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:05.022 [2024-11-29 13:13:04.701796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:05.022 [2024-11-29 13:13:04.701904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:05.022 [2024-11-29 13:13:04.701933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.022 [2024-11-29 13:13:04.701934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.956 Malloc0 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.956 13:13:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.956 [2024-11-29 13:13:05.487865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.956 13:13:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.956 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.957 [2024-11-29 13:13:05.520146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.957 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.957 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:05.957 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.957 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.957 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.957 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2152796 00:29:05.957 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:05.957 13:13:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.870 13:13:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2152741 00:29:07.870 13:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:07.870 Read completed with error (sct=0, sc=8) 00:29:07.870 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 
Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 [2024-11-29 13:13:07.555629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O 
failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 
00:29:07.871 [2024-11-29 13:13:07.555828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting 
I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Write completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 [2024-11-29 13:13:07.556031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.871 starting I/O failed 00:29:07.871 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Write completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Write completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 
00:29:07.872 starting I/O failed 00:29:07.872 Write completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Write completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Write completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Write completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Write completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Write completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Write completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 Read completed with error (sct=0, sc=8) 00:29:07.872 starting I/O failed 00:29:07.872 [2024-11-29 13:13:07.556233] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.872 [2024-11-29 13:13:07.556424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.556448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.556611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.556623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.556857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.556889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.557176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.557212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.557421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.557454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 
00:29:07.872 [2024-11-29 13:13:07.557590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.557622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.557820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.557853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.558040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.558073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.558260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.558293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.558436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.558481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 
00:29:07.872 [2024-11-29 13:13:07.558640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.558656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.558739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.558783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.558915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.558960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.559075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.559106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.559377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.559410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 
00:29:07.872 [2024-11-29 13:13:07.559588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.559620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.559782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.559798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.559986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.560019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.560206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.560239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.560433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.560466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 
00:29:07.872 [2024-11-29 13:13:07.560603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.560636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.560724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.560738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.560892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.560937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.561144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.561176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.561306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.561338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 
00:29:07.872 [2024-11-29 13:13:07.561526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.561559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.561688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.561720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.872 [2024-11-29 13:13:07.561910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.872 [2024-11-29 13:13:07.561942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.872 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.562129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.562161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.562357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.562390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 
00:29:07.873 [2024-11-29 13:13:07.562589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.562621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.562761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.562799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.563635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.563671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.563867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.563900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.564156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.564190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 
00:29:07.873 [2024-11-29 13:13:07.564380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.564414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.564687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.564718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.564860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.564893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.565103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.565137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.565341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.565373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 
00:29:07.873 [2024-11-29 13:13:07.565509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.565540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.565682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.565715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.565856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.565887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.566035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.566069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.566336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.566368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 
00:29:07.873 [2024-11-29 13:13:07.566559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.566591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.566733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.566766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.566877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.566909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.567127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.567160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.567280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.567313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 
00:29:07.873 [2024-11-29 13:13:07.567443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.567475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.567615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.567648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.567856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.567889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.568098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.568132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.568250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.568283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 
00:29:07.873 [2024-11-29 13:13:07.568461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.568494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.568682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.568715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.568908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.568941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.569186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.569221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 00:29:07.873 [2024-11-29 13:13:07.569352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.873 [2024-11-29 13:13:07.569386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.873 qpair failed and we were unable to recover it. 
00:29:07.873 [2024-11-29 13:13:07.569566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.873 [2024-11-29 13:13:07.569598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:07.873 qpair failed and we were unable to recover it.
00:29:07.873 [... the three lines above repeat ~115 times between 13:13:07.569566 and 13:13:07.594240 (wall clock 00:29:07.873 - 00:29:07.877), always errno = 111 against addr=10.0.0.2, port=4420; from 13:13:07.586070 onward the failing tqpair is 0x7f838c000b90 instead of 0x7f8380000b90 ...]
00:29:07.877 [2024-11-29 13:13:07.594455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.594487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.594679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.594712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.594974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.595009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.595131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.595165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.595378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.595411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 
00:29:07.877 [2024-11-29 13:13:07.595690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.595724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.595970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.596004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.596220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.596253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.596450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.596484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.596680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.596712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 
00:29:07.877 [2024-11-29 13:13:07.596912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.596946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.597151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.597184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.597480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.597523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.597720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.597753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.598034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.598068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 
00:29:07.877 [2024-11-29 13:13:07.598315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.598348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.598523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.598561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.598754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.598788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.598971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.599005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.599221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.599255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 
00:29:07.877 [2024-11-29 13:13:07.599434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.599451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.599601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.599634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.599775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.599809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.600020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.600054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.600238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.600271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 
00:29:07.877 [2024-11-29 13:13:07.600402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.600435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.600619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.600652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.600835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.600868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.601113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.601148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.601332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.601365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 
00:29:07.877 [2024-11-29 13:13:07.601516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.601549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.601792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.601826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.602007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.602041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.602291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.602324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.877 [2024-11-29 13:13:07.602518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.602551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 
00:29:07.877 [2024-11-29 13:13:07.602793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.877 [2024-11-29 13:13:07.602825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.877 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.602970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.603005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.603145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.603178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.603313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.603346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.603439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.603454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 
00:29:07.878 [2024-11-29 13:13:07.603552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.603566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.603711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.603727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.603875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.603909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.604119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.604203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.604428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.604466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 
00:29:07.878 [2024-11-29 13:13:07.604601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.604635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.604771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.604805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.605074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.605108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.605230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.605263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.605407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.605439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 
00:29:07.878 [2024-11-29 13:13:07.605654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.605687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.605964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.606000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.606262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.606295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.606474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.606506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.606629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.606641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 
00:29:07.878 [2024-11-29 13:13:07.606783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.606795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.606927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.606942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.607112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.607145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.607279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.607311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.607434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.607467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 
00:29:07.878 [2024-11-29 13:13:07.607733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.607765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.607890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.607923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.608076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.608108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.608345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.608377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.608582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.608614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 
00:29:07.878 [2024-11-29 13:13:07.608778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.608789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.609017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.609050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.609183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.609216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.609430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.609463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.609707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.609739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 
00:29:07.878 [2024-11-29 13:13:07.609935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.609979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.610152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.610185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.610312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.610324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.878 [2024-11-29 13:13:07.610412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.878 [2024-11-29 13:13:07.610423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.878 qpair failed and we were unable to recover it. 00:29:07.879 [2024-11-29 13:13:07.610599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.610632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 
00:29:07.879 [2024-11-29 13:13:07.610814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.610845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 00:29:07.879 [2024-11-29 13:13:07.611046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.611080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 00:29:07.879 [2024-11-29 13:13:07.611274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.611307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 00:29:07.879 [2024-11-29 13:13:07.611481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.611514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 00:29:07.879 [2024-11-29 13:13:07.611704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.611736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 
00:29:07.879 [2024-11-29 13:13:07.611944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.611986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 00:29:07.879 [2024-11-29 13:13:07.612186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.612219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 00:29:07.879 [2024-11-29 13:13:07.612465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.612498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 00:29:07.879 [2024-11-29 13:13:07.612710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.612782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 00:29:07.879 [2024-11-29 13:13:07.612994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.879 [2024-11-29 13:13:07.613033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:07.879 qpair failed and we were unable to recover it. 
00:29:07.879 [2024-11-29 13:13:07.613260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.613300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.613451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.613467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.613711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.613745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.613965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.613999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.614191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.614224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.614360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.614394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.614527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.614560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.614760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.614774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.614880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.614913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.615062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.615096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.615292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.615325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.615499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.615537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.615748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.615780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.615910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.615942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.616245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.616278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.616402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.616434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.616548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.616581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.616706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.616738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.616992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.617004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.617165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.617178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.617313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.617394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.617664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.617701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.617897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.617931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.618193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.618226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.618413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.618446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.618702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.618736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.879 [2024-11-29 13:13:07.618982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.879 [2024-11-29 13:13:07.619016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.879 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.619189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.619205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.619378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.619409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.619586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.619618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.619736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.619769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.619980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.620012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.620277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.620321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.620480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.620496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.620727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.620744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.620884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.620900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.620998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.621013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.621163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.621204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.621412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.621451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.621584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.621617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.621757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.621790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.621973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.622006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.622251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.622283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.622448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.622464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.622670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.622685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.622825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.622841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.622990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.623007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.623174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.623206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.623325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.623358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.623612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.623645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.623919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.623935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.624043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.624057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.624158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.624174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.624338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.624354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.624539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.624572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.624704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.624737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.624982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.625016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.625277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.625310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.625501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.625535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.625730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.625764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.880 qpair failed and we were unable to recover it.
00:29:07.880 [2024-11-29 13:13:07.626031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.880 [2024-11-29 13:13:07.626066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.626263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.626296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.626540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.626574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.626694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.626709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.626851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.626867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.627018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.627034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.627148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.627181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.627365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.627398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.627501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.627537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.627657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.627698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.627803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.627818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.627988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.628021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.628217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.628253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.628374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.628408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.628651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.628684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.628884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.628918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.629237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.629308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.629556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.629639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.629803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.629816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.629965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.630002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.630252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.630284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.630477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.630510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.630728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.630762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.630945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.630991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.631259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.631293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.631495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.631528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.631650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.631666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.631753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.631768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.631929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.631975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.632217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.632250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.632369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.632402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.632566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.632589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.632677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.632688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.632891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.632903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.633046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.881 [2024-11-29 13:13:07.633058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.881 qpair failed and we were unable to recover it.
00:29:07.881 [2024-11-29 13:13:07.633118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.881 [2024-11-29 13:13:07.633129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.881 qpair failed and we were unable to recover it. 00:29:07.881 [2024-11-29 13:13:07.633350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.881 [2024-11-29 13:13:07.633362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.881 qpair failed and we were unable to recover it. 00:29:07.881 [2024-11-29 13:13:07.633523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.881 [2024-11-29 13:13:07.633556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.881 qpair failed and we were unable to recover it. 00:29:07.881 [2024-11-29 13:13:07.633756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.881 [2024-11-29 13:13:07.633788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.881 qpair failed and we were unable to recover it. 00:29:07.881 [2024-11-29 13:13:07.633932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.633992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 
00:29:07.882 [2024-11-29 13:13:07.634237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.634269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.634463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.634497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.634635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.634668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.634802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.634842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.634975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.634987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 
00:29:07.882 [2024-11-29 13:13:07.635193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.635222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.635453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.635485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.635625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.635656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.635836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.635846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.636014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.636025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 
00:29:07.882 [2024-11-29 13:13:07.636159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.636169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.636332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.636364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.636488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.636519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.636702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.636733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.636931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.636975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 
00:29:07.882 [2024-11-29 13:13:07.637189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.637220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.637457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.637467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.637674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.637707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.637837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.637868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.637995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.638034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 
00:29:07.882 [2024-11-29 13:13:07.638175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.638208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.638465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.638497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.638685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.638696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.638834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.638844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.639088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.639098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 
00:29:07.882 [2024-11-29 13:13:07.639246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.639256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.639340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.639351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.639483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.639493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.639638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.639648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.639857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.639889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 
00:29:07.882 [2024-11-29 13:13:07.640028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.640060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.640241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.640273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.640464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.640475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.640616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.640649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.640832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.640862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 
00:29:07.882 [2024-11-29 13:13:07.641002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.641037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.641222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.882 [2024-11-29 13:13:07.641254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.882 qpair failed and we were unable to recover it. 00:29:07.882 [2024-11-29 13:13:07.641377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.641415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.641507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.641517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.641661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.641672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 
00:29:07.883 [2024-11-29 13:13:07.641815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.641825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.641901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.641912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.642047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.642058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.642135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.642145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.642366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.642377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 
00:29:07.883 [2024-11-29 13:13:07.642452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.642462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.642675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.642686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.642766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.642777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.642850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.642860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.643027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.643038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 
00:29:07.883 [2024-11-29 13:13:07.643111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.643121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.643206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.643216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.643371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.643381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.643522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.643532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.643614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.643624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 
00:29:07.883 [2024-11-29 13:13:07.643702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.643712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.643858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.643869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.644032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.644066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.644242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.644275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.644404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.644442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 
00:29:07.883 [2024-11-29 13:13:07.644633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.644665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.644850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.644882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.645074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.645108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.645245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.645277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.645521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.645553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 
00:29:07.883 [2024-11-29 13:13:07.645739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.645771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.645946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.645990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.646112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.646144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.646287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.646318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.646581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.646613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 
00:29:07.883 [2024-11-29 13:13:07.646721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.646752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.647015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.647025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.647109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.647118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.647270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.647280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.883 [2024-11-29 13:13:07.647446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.647478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 
00:29:07.883 [2024-11-29 13:13:07.647671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.883 [2024-11-29 13:13:07.647703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.883 qpair failed and we were unable to recover it. 00:29:07.884 [2024-11-29 13:13:07.647813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.884 [2024-11-29 13:13:07.647844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.884 qpair failed and we were unable to recover it. 00:29:07.884 [2024-11-29 13:13:07.648138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.884 [2024-11-29 13:13:07.648171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.884 qpair failed and we were unable to recover it. 00:29:07.884 [2024-11-29 13:13:07.648364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.884 [2024-11-29 13:13:07.648395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.884 qpair failed and we were unable to recover it. 00:29:07.884 [2024-11-29 13:13:07.648583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.884 [2024-11-29 13:13:07.648614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.884 qpair failed and we were unable to recover it. 
00:29:07.884 [2024-11-29 13:13:07.648725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.884 [2024-11-29 13:13:07.648760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:07.884 qpair failed and we were unable to recover it.
00:29:07.887 [2024-11-29 13:13:07.671874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.671884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.671962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.671976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.672127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.672138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.672223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.672234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.672298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.672308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 
00:29:07.887 [2024-11-29 13:13:07.672393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.672403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.672562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.672572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.672739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.672771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.672978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.673012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.673132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.673164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 
00:29:07.887 [2024-11-29 13:13:07.673290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.673324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.673614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.673650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.673740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.673757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.673915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.673930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.674184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.674200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 
00:29:07.887 [2024-11-29 13:13:07.674385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.674399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.674495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.674509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.674650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.674664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.674817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.674831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.674915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.674930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 
00:29:07.887 [2024-11-29 13:13:07.675141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.675156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.675318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.675339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.675459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.675475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.675686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.675701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.675787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.675801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 
00:29:07.887 [2024-11-29 13:13:07.676035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.676051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.676213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.676234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.676401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.676416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.676507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.676522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 00:29:07.887 [2024-11-29 13:13:07.676623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.887 [2024-11-29 13:13:07.676668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.887 qpair failed and we were unable to recover it. 
00:29:07.888 [2024-11-29 13:13:07.676841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.676872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.677141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.677173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.677370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.677402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.677542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.677581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.677807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.677824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 
00:29:07.888 [2024-11-29 13:13:07.678005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.678021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.678175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.678189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.678275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.678289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.678497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.678518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.678677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.678691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 
00:29:07.888 [2024-11-29 13:13:07.678847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.678862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.679094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.679110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.679277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.679292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.679380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.679395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.679561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.679586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 
00:29:07.888 [2024-11-29 13:13:07.679798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.679814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.680030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.680047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.680200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.680215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:07.888 [2024-11-29 13:13:07.680363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.888 [2024-11-29 13:13:07.680377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:07.888 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.680601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.680615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 
00:29:08.171 [2024-11-29 13:13:07.680715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.680730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.680888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.680906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.681007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.681030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.681213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.681235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.681395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.681417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 
00:29:08.171 [2024-11-29 13:13:07.681707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.681729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.681889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.681902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.681998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.682009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.682218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.682246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.682459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.682491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 
00:29:08.171 [2024-11-29 13:13:07.682710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.682742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.682939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.682952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.683122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.683154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.683278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.683310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.683503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.683534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 
00:29:08.171 [2024-11-29 13:13:07.683734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.683752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.683856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.683884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.684099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.684133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.684266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.684298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.684428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.684459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 
00:29:08.171 [2024-11-29 13:13:07.684762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.684777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.684868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.684882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.171 [2024-11-29 13:13:07.685023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.171 [2024-11-29 13:13:07.685039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.171 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.685198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.685212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.685377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.685408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 
00:29:08.172 [2024-11-29 13:13:07.685533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.685564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.685772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.685804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.685994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.686009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.686178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.686209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.686447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.686479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 
00:29:08.172 [2024-11-29 13:13:07.686652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.686683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.686872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.686886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.687103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.687136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.687408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.687440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.687690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.687722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 
00:29:08.172 [2024-11-29 13:13:07.687853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.687886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.688084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.688117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.688327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.688359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.688564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.688596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.688706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.688744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 
00:29:08.172 [2024-11-29 13:13:07.688895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.688909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.689090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.689123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.689339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.689384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.689602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.689633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.689819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.689833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 
00:29:08.172 [2024-11-29 13:13:07.689969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.690008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.690184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.690216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.690346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.690379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.690621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.690635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.690847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.690878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 
00:29:08.172 [2024-11-29 13:13:07.691067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.691100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.691244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.691274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.691409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.691440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.691643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.691675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.691890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.691922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 
00:29:08.172 [2024-11-29 13:13:07.692180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.692212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.692442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.692474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.692667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.692698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.692911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.692943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.693159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.693191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 
00:29:08.172 [2024-11-29 13:13:07.693319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.693350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.172 qpair failed and we were unable to recover it. 00:29:08.172 [2024-11-29 13:13:07.693620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.172 [2024-11-29 13:13:07.693652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.693866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.693898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.694070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.694127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.694379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.694411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-29 13:13:07.694610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.694642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.694836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.694850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.695103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.695136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.695276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.695308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.695447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.695485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-29 13:13:07.695617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.695631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.695740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.695755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.695832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.695846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.695988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.696003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.696109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.696123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-29 13:13:07.696279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.696293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.696449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.696463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.696704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.696736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.696930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.696994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.697131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.697163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-29 13:13:07.697364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.697396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.697658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.697689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.697811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.697825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.697985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.698000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.698155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.698170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-29 13:13:07.698254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.698268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.698362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.698377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.698623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.698637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.698788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.698802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.698892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.698907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-29 13:13:07.699123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.699155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.699358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.699389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.699528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.699559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.699798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.699812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.699975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.699999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-29 13:13:07.700220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.700252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.700446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.700478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.700664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.700695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.700821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.700853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-29 13:13:07.701068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-29 13:13:07.701101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-29 13:13:07.701309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.701340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.701585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.701616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.701802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.701816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.701979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.702012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.702142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.702174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-29 13:13:07.702349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.702380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.702576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.702590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.702773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.702805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.702998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.703030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.703305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.703337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-29 13:13:07.703591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.703623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.703800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.703831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.704019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.704033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.704213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.704227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.704316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.704330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-29 13:13:07.704490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.704522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.704648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.704680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.704800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.704831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.705102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.705135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.705329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.705360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-29 13:13:07.705552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.705584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.705769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.705800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.705980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.706012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.706189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.706220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.706416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.706448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-29 13:13:07.706575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.706607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.706787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.706800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.706974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.706989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.707216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.707247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-29 13:13:07.707421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-29 13:13:07.707452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-29 13:13:07.707661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.174 [2024-11-29 13:13:07.707691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.174 qpair failed and we were unable to recover it.
00:29:08.174 [2024-11-29 13:13:07.707883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.174 [2024-11-29 13:13:07.707898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.174 qpair failed and we were unable to recover it.
00:29:08.174 [2024-11-29 13:13:07.708095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.174 [2024-11-29 13:13:07.708110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.174 qpair failed and we were unable to recover it.
00:29:08.174 [2024-11-29 13:13:07.708333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.174 [2024-11-29 13:13:07.708363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.174 qpair failed and we were unable to recover it.
00:29:08.174 [2024-11-29 13:13:07.708634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.174 [2024-11-29 13:13:07.708666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.174 qpair failed and we were unable to recover it.
00:29:08.174 [2024-11-29 13:13:07.708935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.174 [2024-11-29 13:13:07.709008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.174 qpair failed and we were unable to recover it.
00:29:08.174 [2024-11-29 13:13:07.709257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.174 [2024-11-29 13:13:07.709290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.174 qpair failed and we were unable to recover it.
00:29:08.174 [2024-11-29 13:13:07.709579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.174 [2024-11-29 13:13:07.709630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.174 qpair failed and we were unable to recover it.
00:29:08.174 [2024-11-29 13:13:07.709842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.174 [2024-11-29 13:13:07.709877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.174 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.710120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.710135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.710293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.710307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.710513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.710530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.710605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.710620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.710701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.710716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.710790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.710805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.711060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.711094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.711225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.711256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.711395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.711426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.711622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.711653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.711850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.711882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.712123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.712138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.712298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.712330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.712521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.712553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.712744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.712775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.712961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.712977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.713148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.713163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.713355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.713393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.713515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.713546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.713732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.713763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.713978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.714016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.714229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.714243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.714355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.714369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.714604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.714635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.714882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.714913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.715198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.715240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.715377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.715408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.715599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.715613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.715849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.715881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.716141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.716174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.175 [2024-11-29 13:13:07.716314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.175 [2024-11-29 13:13:07.716345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.175 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.716478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.716510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.716709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.716740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.716868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.716882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.717120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.717154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.717280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.717311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.717488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.717519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.717706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.717721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.717861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.717875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.718055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.718070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.718309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.718341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.718606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.718637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.718811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.718842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.719040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.719073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.719327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.719358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.719465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.719496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.719742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.719784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.719992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.720007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.720149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.720163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.720307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.720349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.720548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.720579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.720774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.720805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.720981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.720999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.721088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.721102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.721252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.721266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.721353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.721367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.721519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.721533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.721630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.721658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.721846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.721878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.722070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.722103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.722227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.722258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.722437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.722469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.722607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.722639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.722828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.722859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.722988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.723021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.723208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.723239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.723555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.723626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.723771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.176 [2024-11-29 13:13:07.723807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.176 qpair failed and we were unable to recover it.
00:29:08.176 [2024-11-29 13:13:07.724011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.724046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.724253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.724285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.724480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.724513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.724709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.724740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.724980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.724995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.725235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.725250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.725408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.725422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.725585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.725616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.725808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.725840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.726084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.726117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.726364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.726395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.726527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.726569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.726750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.726782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.727019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.727034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.727258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.727273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.727369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.727384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.727590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.727605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.727772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.177 [2024-11-29 13:13:07.727786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.177 qpair failed and we were unable to recover it.
00:29:08.177 [2024-11-29 13:13:07.727996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.728030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.728169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.728201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.728448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.728479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.728727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.728759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.728964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.728998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 
00:29:08.177 [2024-11-29 13:13:07.729174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.729207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.729412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.729445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.729629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.729663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.729910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.729942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.730160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.730192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 
00:29:08.177 [2024-11-29 13:13:07.730325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.730357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.730551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.730583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.730776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.730807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.730978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.730993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.731100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.731116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 
00:29:08.177 [2024-11-29 13:13:07.731307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.731322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.731425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.731440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.731593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.731630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.731821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.731853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.177 [2024-11-29 13:13:07.732045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.732080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 
00:29:08.177 [2024-11-29 13:13:07.732287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.177 [2024-11-29 13:13:07.732320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.177 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.732423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.732456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.732644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.732675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.732854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.732887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.733072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.733088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-29 13:13:07.733251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.733289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.733422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.733454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.733647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.733678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.733891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.733923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.734134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.734167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-29 13:13:07.734343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.734376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.734571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.734603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.734884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.734916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.735139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.735178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.735400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.735432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-29 13:13:07.735646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.735678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.735941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.735961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.736185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.736200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.736362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.736377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.736529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.736543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-29 13:13:07.736685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.736700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.736843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.736857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.737021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.737053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.737160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.737192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.737387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.737418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-29 13:13:07.737553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.737583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.737733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.737747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.737833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.737847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.737952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.737966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.738061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.738075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-29 13:13:07.738185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.738200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.738359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.738398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.738569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.738600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.738793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.738827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.739068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.739083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-29 13:13:07.739277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.739291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.739387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.739401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.739473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.739487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.739692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-29 13:13:07.739707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-29 13:13:07.739846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.739860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-29 13:13:07.740055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.740126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.740304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.740339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.740486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.740519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.740650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.740683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.740945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.740992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-29 13:13:07.741190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.741201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.741340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.741350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.741493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.741503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.741672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.741704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.741915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.741959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-29 13:13:07.742154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.742187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.742467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.742499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.742621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.742652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.742745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.742758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.742979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.743013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-29 13:13:07.743131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.743163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.743296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.743327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.743525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.743557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.743675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.743707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.743962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.743995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-29 13:13:07.744178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.744210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.744496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.744506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.744596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.744606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.744759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.744770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.744906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.744917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-29 13:13:07.745113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.745124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.745205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.745215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.745322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.745354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.745483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.745514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-29 13:13:07.745707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-29 13:13:07.745739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.182 [2024-11-29 13:13:07.762012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-29 13:13:07.762047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-29 13:13:07.762228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-29 13:13:07.762262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-29 13:13:07.762435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-29 13:13:07.762471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-29 13:13:07.762615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-29 13:13:07.762647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-29 13:13:07.762833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-29 13:13:07.762865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-29 13:13:07.766003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.182 [2024-11-29 13:13:07.766020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.182 qpair failed and we were unable to recover it. 00:29:08.182 [2024-11-29 13:13:07.766186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.182 [2024-11-29 13:13:07.766201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.182 qpair failed and we were unable to recover it. 00:29:08.182 [2024-11-29 13:13:07.766288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.182 [2024-11-29 13:13:07.766302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.182 qpair failed and we were unable to recover it. 00:29:08.182 [2024-11-29 13:13:07.766481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.182 [2024-11-29 13:13:07.766494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.182 qpair failed and we were unable to recover it. 00:29:08.182 [2024-11-29 13:13:07.766641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.182 [2024-11-29 13:13:07.766672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.182 qpair failed and we were unable to recover it. 
00:29:08.182 [2024-11-29 13:13:07.766800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.182 [2024-11-29 13:13:07.766831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.182 qpair failed and we were unable to recover it. 00:29:08.182 [2024-11-29 13:13:07.766968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.182 [2024-11-29 13:13:07.767001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.182 qpair failed and we were unable to recover it. 00:29:08.182 [2024-11-29 13:13:07.767182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.182 [2024-11-29 13:13:07.767199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.182 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.767274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.767287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.767400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.767430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-29 13:13:07.767570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.767601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.767866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.767896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.768096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.768111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.768271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.768285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.768382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.768396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-29 13:13:07.768494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.768507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.768647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.768661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.768758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.768772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.768940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.768985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.769174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.769206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-29 13:13:07.769329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.769359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.769590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.769621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.769803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.769835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.770031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.770045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.770203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.770217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-29 13:13:07.770377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.770391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.770534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.770548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.770730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.770762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.770895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.770926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.771075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.771107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-29 13:13:07.771276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.771307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.771560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.771592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.771733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.771747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.771905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.771936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.772246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.772284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-29 13:13:07.772420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.772451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.772642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.772674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.772967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.772999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.773231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.773245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.773396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.773410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-29 13:13:07.773559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.773573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.773710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.773724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.773888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.773919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.774203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.774235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.774424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.774455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-29 13:13:07.774702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.774733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-29 13:13:07.774928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-29 13:13:07.774970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.775219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.775233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.775440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.775471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.775600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.775631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-29 13:13:07.775807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.775837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.776049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.776063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.776141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.776182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.776367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.776398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.776596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.776626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-29 13:13:07.776844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.776876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.777157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.777172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.777314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.777344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.777613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.777644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.777791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.777805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-29 13:13:07.778027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.778042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.778200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.778236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.778440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.778472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.778609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.778639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.778902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.778917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-29 13:13:07.779033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.779065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.779241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.779272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.779467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.779499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.779631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.779662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.779849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.779881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-29 13:13:07.780139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.780172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.780391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.780405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.780543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.780557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.780742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.780773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-29 13:13:07.780908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-29 13:13:07.780939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-29 13:13:07.781123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.184 [2024-11-29 13:13:07.781188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.184 qpair failed and we were unable to recover it.
[The identical posix.c:1054:posix_sock_create (connect() failed, errno = 111) and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock (tqpair=0x7f838c000b90, addr=10.0.0.2, port=4420) error pair repeats continuously from 13:13:07.781 through 13:13:07.805, each attempt ending with "qpair failed and we were unable to recover it."]
00:29:08.187 [2024-11-29 13:13:07.805121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.187 [2024-11-29 13:13:07.805135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.187 qpair failed and we were unable to recover it. 00:29:08.187 [2024-11-29 13:13:07.805346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.187 [2024-11-29 13:13:07.805378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.187 qpair failed and we were unable to recover it. 00:29:08.187 [2024-11-29 13:13:07.805590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.187 [2024-11-29 13:13:07.805623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.187 qpair failed and we were unable to recover it. 00:29:08.187 [2024-11-29 13:13:07.805810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.187 [2024-11-29 13:13:07.805843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.187 qpair failed and we were unable to recover it. 00:29:08.187 [2024-11-29 13:13:07.806026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.187 [2024-11-29 13:13:07.806042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.187 qpair failed and we were unable to recover it. 
00:29:08.187 [2024-11-29 13:13:07.806261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.187 [2024-11-29 13:13:07.806294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.187 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.806547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.806579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.806862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.806899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.807055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.807070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.807249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.807282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 
00:29:08.188 [2024-11-29 13:13:07.807392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.807423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.807536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.807568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.807841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.807874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.807987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.808020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.808208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.808240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 
00:29:08.188 [2024-11-29 13:13:07.808387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.808421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.808715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.808748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.808887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.808919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.809067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.809101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.809230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.809261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 
00:29:08.188 [2024-11-29 13:13:07.809503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.809518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.809701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.809717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.809818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.809832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.809914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.809929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.810097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.810113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 
00:29:08.188 [2024-11-29 13:13:07.810200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.810215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.810370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.810384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.810501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.810534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.810778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.810810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.810963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.810998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 
00:29:08.188 [2024-11-29 13:13:07.811197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.811212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.811324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.811339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.811540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.811573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.811776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.811809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.812001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.812017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 
00:29:08.188 [2024-11-29 13:13:07.812099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.812114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.812273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.812288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.812386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.812425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.812553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.812587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.812766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.812799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 
00:29:08.188 [2024-11-29 13:13:07.813005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.813047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.813206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.813221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.813330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.813345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.813438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.188 [2024-11-29 13:13:07.813453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.188 qpair failed and we were unable to recover it. 00:29:08.188 [2024-11-29 13:13:07.813605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.813637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 
00:29:08.189 [2024-11-29 13:13:07.813762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.813801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.813914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.813955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.814072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.814103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.814282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.814297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.814386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.814401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 
00:29:08.189 [2024-11-29 13:13:07.814496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.814529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.814649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.814682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.814864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.814895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.815030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.815045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.815155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.815170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 
00:29:08.189 [2024-11-29 13:13:07.815313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.815329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.815435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.815450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.815684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.815717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.815842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.815874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.816016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.816051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 
00:29:08.189 [2024-11-29 13:13:07.816250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.816264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.816410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.816424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.816567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.816582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.816737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.816769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.816909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.816942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 
00:29:08.189 [2024-11-29 13:13:07.817096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.817128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.817399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.817414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.817571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.817586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.817678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.817721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.817859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.817892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 
00:29:08.189 [2024-11-29 13:13:07.818084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.818117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.818399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.818414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.818574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.818589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.818765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.818780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.819000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.819015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 
00:29:08.189 [2024-11-29 13:13:07.819110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.819124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.819282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.819296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.189 [2024-11-29 13:13:07.819446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.189 [2024-11-29 13:13:07.819479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.189 qpair failed and we were unable to recover it. 00:29:08.190 [2024-11-29 13:13:07.819601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.190 [2024-11-29 13:13:07.819634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.190 qpair failed and we were unable to recover it. 00:29:08.190 [2024-11-29 13:13:07.819843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.190 [2024-11-29 13:13:07.819875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.190 qpair failed and we were unable to recover it. 
00:29:08.192 [2024-11-29 13:13:07.840942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.192 [2024-11-29 13:13:07.840991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.192 qpair failed and we were unable to recover it. 00:29:08.192 [2024-11-29 13:13:07.841194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.192 [2024-11-29 13:13:07.841226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.192 qpair failed and we were unable to recover it. 00:29:08.192 [2024-11-29 13:13:07.841464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.192 [2024-11-29 13:13:07.841500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.192 qpair failed and we were unable to recover it. 00:29:08.192 [2024-11-29 13:13:07.841727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.841759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.841960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.841995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 
00:29:08.193 [2024-11-29 13:13:07.842221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.842253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.842427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.842442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.842610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.842643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.842863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.842896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.843101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.843143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 
00:29:08.193 [2024-11-29 13:13:07.843243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.843258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.843415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.843445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.843636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.843669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.843782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.843814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.843936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.844004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 
00:29:08.193 [2024-11-29 13:13:07.844152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.844170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.844243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.844258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.844360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.844374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.844467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.844482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.844583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.844597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 
00:29:08.193 [2024-11-29 13:13:07.844704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.844719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.844944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.844965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.845106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.845122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.845231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.845246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.845335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.845350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 
00:29:08.193 [2024-11-29 13:13:07.845591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.845625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.845827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.845860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.846100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.846116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.846295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.846329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.846483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.846517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 
00:29:08.193 [2024-11-29 13:13:07.846651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.846684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.846818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.846850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.847114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.847148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.847273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.847305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.847424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.847439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 
00:29:08.193 [2024-11-29 13:13:07.847592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.847607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.847767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.847781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.848001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.848036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.848243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.848277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.848569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.848601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 
00:29:08.193 [2024-11-29 13:13:07.848735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.848767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.193 qpair failed and we were unable to recover it. 00:29:08.193 [2024-11-29 13:13:07.848889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.193 [2024-11-29 13:13:07.848922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.849060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.849094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.849284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.849305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.849395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.849409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 
00:29:08.194 [2024-11-29 13:13:07.849568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.849583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.849743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.849775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.849921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.849964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.850150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.850182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.850301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.850315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 
00:29:08.194 [2024-11-29 13:13:07.850495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.850510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.850672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.850687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.850882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.850917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.851118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.851134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.851219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.851233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 
00:29:08.194 [2024-11-29 13:13:07.851419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.851457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.851669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.851701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.851894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.851909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.852133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.852149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.852236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.852251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 
00:29:08.194 [2024-11-29 13:13:07.852465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.852479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.852654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.852669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.852818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.852850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.852988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.853022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.853160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.853192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 
00:29:08.194 [2024-11-29 13:13:07.853416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.853450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.853706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.853737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.853925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.853974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.854112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.854126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.854214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.854229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 
00:29:08.194 [2024-11-29 13:13:07.854307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.854321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.854515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.854548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.854660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.854692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.854823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.854855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 00:29:08.194 [2024-11-29 13:13:07.854994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.194 [2024-11-29 13:13:07.855028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.194 qpair failed and we were unable to recover it. 
00:29:08.194 [2024-11-29 13:13:07.855208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.195 [2024-11-29 13:13:07.855222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.195 qpair failed and we were unable to recover it.
[the three log lines above repeat for roughly 110 further connection attempts between 13:13:07.855 and 13:13:07.875, first against tqpair=0x7f838c000b90, then tqpair=0x7f8380000b90, then tqpair=0x1dfabe0, all with addr=10.0.0.2, port=4420 and errno = 111]
00:29:08.197 [2024-11-29 13:13:07.875385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.197 [2024-11-29 13:13:07.875416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.875602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.875641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.875922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.875968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.876103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.876118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.876224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.876239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 
00:29:08.198 [2024-11-29 13:13:07.876384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.876399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.876554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.876586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.876726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.876757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.876975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.877010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.877226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.877241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 
00:29:08.198 [2024-11-29 13:13:07.877395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.877427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.877678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.877709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.877901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.877933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.878161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.878176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.878346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.878378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 
00:29:08.198 [2024-11-29 13:13:07.878633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.878667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.878782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.878815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.879008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.879042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.879286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.879318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.879545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.879559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 
00:29:08.198 [2024-11-29 13:13:07.879710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.879726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.879889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.879920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.880197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.880231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.880416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.880430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.880574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.880588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 
00:29:08.198 [2024-11-29 13:13:07.880691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.880705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.880865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.880882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.881054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.881069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.881231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.881262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.881465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.881497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 
00:29:08.198 [2024-11-29 13:13:07.881627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.881666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.881845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.881877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.882011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.882047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.198 [2024-11-29 13:13:07.882221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.198 [2024-11-29 13:13:07.882235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.198 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.882399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.882430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 
00:29:08.199 [2024-11-29 13:13:07.882679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.882712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.882908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.882941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.883163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.883197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.883409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.883442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.883643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.883677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 
00:29:08.199 [2024-11-29 13:13:07.883971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.884019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.884124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.884139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.884395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.884426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.884611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.884642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.884897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.884930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 
00:29:08.199 [2024-11-29 13:13:07.885085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.885100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.885263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.885299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.885428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.885460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.885726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.885758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.885882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.885913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 
00:29:08.199 [2024-11-29 13:13:07.886145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.886161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.886340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.886372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.886560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.886592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.886735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.886774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.886977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.887010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 
00:29:08.199 [2024-11-29 13:13:07.887136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.887168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.887296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.887336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.887433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.887460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.887635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.887670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.887868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.887901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 
00:29:08.199 [2024-11-29 13:13:07.888040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.888072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.888327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.888341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.888496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.888527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.888710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.888741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.888988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.889020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 
00:29:08.199 [2024-11-29 13:13:07.889245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.889261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.889362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.889392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.889521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.889552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.889689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.889721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.889903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.889934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 
00:29:08.199 [2024-11-29 13:13:07.890145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.890178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.890372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.890388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.199 qpair failed and we were unable to recover it. 00:29:08.199 [2024-11-29 13:13:07.890468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.199 [2024-11-29 13:13:07.890481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.200 qpair failed and we were unable to recover it. 00:29:08.200 [2024-11-29 13:13:07.890598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.200 [2024-11-29 13:13:07.890630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.200 qpair failed and we were unable to recover it. 00:29:08.200 [2024-11-29 13:13:07.890860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.200 [2024-11-29 13:13:07.890894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.200 qpair failed and we were unable to recover it. 
00:29:08.200 [2024-11-29 13:13:07.891092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.200 [2024-11-29 13:13:07.891125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.200 qpair failed and we were unable to recover it.
00:29:08.203 [last message repeated through 2024-11-29 13:13:07.914257: connect() failed, errno = 111 / sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.]
00:29:08.203 [2024-11-29 13:13:07.914416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.914431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.914533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.914548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.914681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.914696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.914868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.914900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.915055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.915088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 
00:29:08.203 [2024-11-29 13:13:07.915266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.915298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.915555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.915570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.915666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.915698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.915846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.915879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.916005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.916039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 
00:29:08.203 [2024-11-29 13:13:07.916167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.916199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.916310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.916325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.916528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.916567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.916693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.916726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.916989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.917022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 
00:29:08.203 [2024-11-29 13:13:07.917246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.917279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.917422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.917454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.917593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.917607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.917687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.917700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.917913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.917962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 
00:29:08.203 [2024-11-29 13:13:07.918219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.918234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.918428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.918442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.918609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.918624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.918796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.918828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.918962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.918996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 
00:29:08.203 [2024-11-29 13:13:07.919212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.919244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.919472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.919507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.919623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.919657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.919830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.919846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.919964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.919981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 
00:29:08.203 [2024-11-29 13:13:07.920161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.920176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.920348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.920362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.920459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.920474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.920566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.920581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 00:29:08.203 [2024-11-29 13:13:07.920663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.203 [2024-11-29 13:13:07.920677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.203 qpair failed and we were unable to recover it. 
00:29:08.203 [2024-11-29 13:13:07.920775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.920808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.921059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.921094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.921232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.921264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.921390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.921403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.921480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.921500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 
00:29:08.204 [2024-11-29 13:13:07.921647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.921680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.921888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.921920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.922072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.922111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.922296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.922310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.922415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.922431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 
00:29:08.204 [2024-11-29 13:13:07.922573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.922587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.922671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.922685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.922762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.922778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.922857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.922871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.922963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.922978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 
00:29:08.204 [2024-11-29 13:13:07.923051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.923065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.923224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.923255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.923385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.923418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.923610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.923643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.923777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.923810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 
00:29:08.204 [2024-11-29 13:13:07.924013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.924049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.924271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.924306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.924502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.924516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.924612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.924628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.924773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.924789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 
00:29:08.204 [2024-11-29 13:13:07.924986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.925022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.925270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.925302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.925471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.925492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.925582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.925597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.925855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.925870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 
00:29:08.204 [2024-11-29 13:13:07.926043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.926060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.926247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.926289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.926512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.926545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.926749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.926782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.926966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.927000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 
00:29:08.204 [2024-11-29 13:13:07.927297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.927329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.927520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.927536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.927754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.927786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.927919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.927964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 00:29:08.204 [2024-11-29 13:13:07.928146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.204 [2024-11-29 13:13:07.928179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.204 qpair failed and we were unable to recover it. 
00:29:08.204 [2024-11-29 13:13:07.928313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.204 [2024-11-29 13:13:07.928327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.204 qpair failed and we were unable to recover it.
00:29:08.207 [2024-11-29 13:13:07.949741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.949756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.949921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.949936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.950128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.950143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.950291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.950306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.950379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.950394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 
00:29:08.207 [2024-11-29 13:13:07.950488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.950503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.950644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.950659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.950880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.950911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.951061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.951094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.951207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.951247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 
00:29:08.207 [2024-11-29 13:13:07.951377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.951407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.951517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.951531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.207 [2024-11-29 13:13:07.951715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.207 [2024-11-29 13:13:07.951730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.207 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.951876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.951909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.952063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.952096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 
00:29:08.208 [2024-11-29 13:13:07.952344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.952377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.952555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.952570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.952644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.952659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.952802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.952816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.952890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.952904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 
00:29:08.208 [2024-11-29 13:13:07.952978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.952994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.953204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.953219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.953477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.953511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.953787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.953819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.953942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.953986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 
00:29:08.208 [2024-11-29 13:13:07.954176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.954191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.954277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.954292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.954455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.954470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.954546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.954560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.954642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.954658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 
00:29:08.208 [2024-11-29 13:13:07.954813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.954843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.955039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.955073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.955286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.955319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.955650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.955682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.955891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.955924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 
00:29:08.208 [2024-11-29 13:13:07.956064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.956096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.956244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.956275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.956471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.956503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.956736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.956770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.956968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.956988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 
00:29:08.208 [2024-11-29 13:13:07.957102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.957133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.957317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.957349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.957530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.957563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.957804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.957818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.957978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.958013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 
00:29:08.208 [2024-11-29 13:13:07.958285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.958318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.958492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.958524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.958725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.958757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.959017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.959050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.959182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.959197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 
00:29:08.208 [2024-11-29 13:13:07.959276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.959290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.959512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.959544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.959721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.959753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.959981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.960014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.960231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.960262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 
00:29:08.208 [2024-11-29 13:13:07.960513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.960548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.960750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.960781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.960997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.208 [2024-11-29 13:13:07.961030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.208 qpair failed and we were unable to recover it. 00:29:08.208 [2024-11-29 13:13:07.961293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.961309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.961454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.961469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 
00:29:08.209 [2024-11-29 13:13:07.961657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.961672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.961932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.961985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.962141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.962189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.962404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.962438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.962701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.962716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 
00:29:08.209 [2024-11-29 13:13:07.962854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.962869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.963054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.963076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.963241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.963255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.963414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.963429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.963517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.963553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 
00:29:08.209 [2024-11-29 13:13:07.963753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.963785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.963915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.963959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.964146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.964192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.964426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.964457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.964690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.964705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 
00:29:08.209 [2024-11-29 13:13:07.964850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.964864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.965008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.965026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.965125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.965140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.965248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.965280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.965472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.965505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 
00:29:08.209 [2024-11-29 13:13:07.965720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.965752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.966018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.966052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.966189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.966231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.966451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.966469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.209 [2024-11-29 13:13:07.966653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.966669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 
00:29:08.209 [2024-11-29 13:13:07.966776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.209 [2024-11-29 13:13:07.966791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.209 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.966997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.967014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.967185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.967199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.967364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.967379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.967556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.967571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 
00:29:08.489 [2024-11-29 13:13:07.967651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.967666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.967831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.967852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.967980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.968003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.968180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.968206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.968357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.968380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 
00:29:08.489 [2024-11-29 13:13:07.968617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.968639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.968792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.968813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.969027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.969050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.969240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.969262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.969428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.969450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 
00:29:08.489 [2024-11-29 13:13:07.969611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.969631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.969827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.969844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.969991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.970007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.970183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.970198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.970283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.970299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 
00:29:08.489 [2024-11-29 13:13:07.970400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.970415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.970526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.970541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.970705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.970721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.970806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.489 [2024-11-29 13:13:07.970821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.489 qpair failed and we were unable to recover it. 00:29:08.489 [2024-11-29 13:13:07.971031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.971047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 
00:29:08.490 [2024-11-29 13:13:07.971229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.971243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.971365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.971380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.971474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.971489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.971571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.971586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.971734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.971748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 
00:29:08.490 [2024-11-29 13:13:07.971967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.971983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.972080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.972095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.972245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.972260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.972358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.972372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.972571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.972603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 
00:29:08.490 [2024-11-29 13:13:07.972769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.972801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.973009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.973043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.973234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.973265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.973382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.973397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.973607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.973621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 
00:29:08.490 [2024-11-29 13:13:07.973784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.973799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.973972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.974006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.974256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.974289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.974400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.974432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.974602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.974617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 
00:29:08.490 [2024-11-29 13:13:07.974709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.974724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.974864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.974879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.974976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.974991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.975170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.975203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.975385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.975422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 
00:29:08.490 [2024-11-29 13:13:07.975680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.975713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.975896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.975929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.976089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.976121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.976414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.976446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.976658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.976692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 
00:29:08.490 [2024-11-29 13:13:07.976881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.976914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.977040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.977073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.977353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.977386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.977574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.977590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.977694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.977708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 
00:29:08.490 [2024-11-29 13:13:07.977793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.977807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.977912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.977926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.490 [2024-11-29 13:13:07.978082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.490 [2024-11-29 13:13:07.978097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.490 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.978254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.978286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.978431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.978464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 
00:29:08.491 [2024-11-29 13:13:07.978650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.978681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.978928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.978974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.979158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.979191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.979413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.979445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.979575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.979596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 
00:29:08.491 [2024-11-29 13:13:07.979735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.979750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.979961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.979978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.980165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.980179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.980395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.980429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.980704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.980738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 
00:29:08.491 [2024-11-29 13:13:07.980851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.980885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.981013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.981057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.981242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.981275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.981476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.981491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.981659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.981673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 
00:29:08.491 [2024-11-29 13:13:07.981776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.981792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.982010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.982044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.982236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.982270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.982404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.982436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 00:29:08.491 [2024-11-29 13:13:07.982572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.982587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it. 
00:29:08.491 [2024-11-29 13:13:07.982813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.491 [2024-11-29 13:13:07.982844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.491 qpair failed and we were unable to recover it.
[... the same error triplet — posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 13:13:07.982813 through 13:13:08.005056, first for tqpair=0x1dfabe0, then for tqpair=0x7f8380000b90, then again for tqpair=0x1dfabe0, all targeting addr=10.0.0.2, port=4420 ...]
00:29:08.494 [2024-11-29 13:13:08.005188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.005219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 00:29:08.494 [2024-11-29 13:13:08.005350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.005382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 00:29:08.494 [2024-11-29 13:13:08.005575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.005608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 00:29:08.494 [2024-11-29 13:13:08.005740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.005771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 00:29:08.494 [2024-11-29 13:13:08.006041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.006097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 
00:29:08.494 [2024-11-29 13:13:08.006347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.006381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 00:29:08.494 [2024-11-29 13:13:08.006522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.006555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 00:29:08.494 [2024-11-29 13:13:08.006677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.006692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 00:29:08.494 [2024-11-29 13:13:08.006840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.006855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 00:29:08.494 [2024-11-29 13:13:08.007017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.007051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 
00:29:08.494 [2024-11-29 13:13:08.007256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.494 [2024-11-29 13:13:08.007288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.494 qpair failed and we were unable to recover it. 00:29:08.494 [2024-11-29 13:13:08.007496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.007535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.007699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.007713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.007905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.007935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.008127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.008160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 
00:29:08.495 [2024-11-29 13:13:08.008288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.008319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.008431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.008445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.008683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.008715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.008906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.008938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.009174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.009209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 
00:29:08.495 [2024-11-29 13:13:08.009413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.009428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.009499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.009513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.009607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.009622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.009783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.009823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.010022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.010056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 
00:29:08.495 [2024-11-29 13:13:08.010193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.010226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.010427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.010441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.010596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.010633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.010885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.010917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.011058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.011090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 
00:29:08.495 [2024-11-29 13:13:08.011356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.011389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.011577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.011592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.011689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.011704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.011859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.011873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.012055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.012089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 
00:29:08.495 [2024-11-29 13:13:08.012234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.012267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.012511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.012543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.012727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.012760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.012972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.013011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.013246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.013278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 
00:29:08.495 [2024-11-29 13:13:08.013495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.013529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.013647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.013662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.013823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.013838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.013969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.014002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.014269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.014300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 
00:29:08.495 [2024-11-29 13:13:08.014504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.014518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.014757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.014790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.014906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.014938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.015083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.015116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 00:29:08.495 [2024-11-29 13:13:08.015293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.495 [2024-11-29 13:13:08.015336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.495 qpair failed and we were unable to recover it. 
00:29:08.496 [2024-11-29 13:13:08.015547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.015562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.015705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.015719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.015824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.015864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.016050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.016084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.016279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.016311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 
00:29:08.496 [2024-11-29 13:13:08.016502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.016517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.016668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.016701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.016829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.016861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.017112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.017145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.017273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.017306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 
00:29:08.496 [2024-11-29 13:13:08.017487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.017502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.017583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.017597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.017774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.017788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.017890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.017904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.018060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.018075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 
00:29:08.496 [2024-11-29 13:13:08.018172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.018187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.018282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.018296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.018437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.018452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.018548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.018580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.018858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.018889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 
00:29:08.496 [2024-11-29 13:13:08.019092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.019124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.019342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.019374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.019573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.019605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.019792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.019807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.019873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.019888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 
00:29:08.496 [2024-11-29 13:13:08.019982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.019997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.020160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.020175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.020339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.020353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.020505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.020519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.020694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.020726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 
00:29:08.496 [2024-11-29 13:13:08.020927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.020970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.021093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.021125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.021267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.021299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.021541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.021573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.021762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.021777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 
00:29:08.496 [2024-11-29 13:13:08.021967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.022001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.022138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.022170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.022294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.022326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.496 [2024-11-29 13:13:08.022565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.496 [2024-11-29 13:13:08.022580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.496 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.022731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.022764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 
00:29:08.497 [2024-11-29 13:13:08.022982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.023016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.023215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.023247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.023421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.023455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.023650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.023683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.023818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.023852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 
00:29:08.497 [2024-11-29 13:13:08.023996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.024030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.024292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.024324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.024570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.024585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.024731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.024746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.024869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.024884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 
00:29:08.497 [2024-11-29 13:13:08.025035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.025050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.025266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.025297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.025493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.025526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.025717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.025748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.025851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.025883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 
00:29:08.497 [2024-11-29 13:13:08.026094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.026127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.026250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.026289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.026538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.026571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.026785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.026817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.026999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.027033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 
00:29:08.497 [2024-11-29 13:13:08.027323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.027338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.027413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.027428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.027518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.027533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.027621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.027635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.027725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.027769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 
00:29:08.497 [2024-11-29 13:13:08.027946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.027999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.028177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.028209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.028341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.028372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.028557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.028589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.028713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.028731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 
00:29:08.497 [2024-11-29 13:13:08.028917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.028931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.029031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.029046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.029134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.029148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.029239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.029254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.029394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.029409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 
00:29:08.497 [2024-11-29 13:13:08.029500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.029514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.029677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.029692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.497 qpair failed and we were unable to recover it. 00:29:08.497 [2024-11-29 13:13:08.029851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.497 [2024-11-29 13:13:08.029871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.029959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.029975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.030051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.030066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 
00:29:08.498 [2024-11-29 13:13:08.030220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.030254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.030383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.030415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.030511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.030527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.030611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.030634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.030788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.030804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 
00:29:08.498 [2024-11-29 13:13:08.030966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.030983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.031079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.031093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.031205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.031220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.031306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.031321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.031478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.031494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 
00:29:08.498 [2024-11-29 13:13:08.031587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.031607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.031720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.031735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.031895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.031910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.032056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.032072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.032169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.032183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 
00:29:08.498 [2024-11-29 13:13:08.032266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.032306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.032426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.032461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.032684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.032719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.032863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.032895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.033030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.033065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 
00:29:08.498 [2024-11-29 13:13:08.033209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.033243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.033429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.033461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.033588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.033622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.033802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.033818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.034030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.034045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 
00:29:08.498 [2024-11-29 13:13:08.034115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.034131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.034324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.034339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.034415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.034461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.034652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.034685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.034805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.034840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 
00:29:08.498 [2024-11-29 13:13:08.034956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.498 [2024-11-29 13:13:08.034997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.498 qpair failed and we were unable to recover it. 00:29:08.498 [2024-11-29 13:13:08.035225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.035259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.035381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.035413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.035595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.035628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.035758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.035790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 
00:29:08.499 [2024-11-29 13:13:08.035988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.036022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.036146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.036179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.036346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.036361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.036464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.036479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.036566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.036581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 
00:29:08.499 [2024-11-29 13:13:08.036685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.036700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.036841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.036856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.036995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.037011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.037148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.037162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 00:29:08.499 [2024-11-29 13:13:08.037326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.499 [2024-11-29 13:13:08.037358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.499 qpair failed and we were unable to recover it. 
00:29:08.499 [2024-11-29 13:13:08.042657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.500 [2024-11-29 13:13:08.042671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.500 qpair failed and we were unable to recover it.
00:29:08.500 [2024-11-29 13:13:08.042768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.500 [2024-11-29 13:13:08.042803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.500 qpair failed and we were unable to recover it.
00:29:08.500 [2024-11-29 13:13:08.043024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.500 [2024-11-29 13:13:08.043045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.500 qpair failed and we were unable to recover it.
00:29:08.500 [2024-11-29 13:13:08.043212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.500 [2024-11-29 13:13:08.043228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.500 qpair failed and we were unable to recover it.
00:29:08.500 [2024-11-29 13:13:08.043395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.500 [2024-11-29 13:13:08.043430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.500 qpair failed and we were unable to recover it.
00:29:08.500 [2024-11-29 13:13:08.043615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.043646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.043789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.043822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.043969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.044003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.044204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.044238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.044375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.044411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 
00:29:08.500 [2024-11-29 13:13:08.044622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.044636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.044792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.044827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.045039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.045074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.045213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.045245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.045387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.045434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 
00:29:08.500 [2024-11-29 13:13:08.045688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.045705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.045796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.045811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.045920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.045935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.046099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.046114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.046185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.046201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 
00:29:08.500 [2024-11-29 13:13:08.046339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.046354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.046483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.046498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.046638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.046653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.046750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.046772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.046901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.046935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 
00:29:08.500 [2024-11-29 13:13:08.047078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.047111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.047232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.047265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.047449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.047482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.047784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.047817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.048044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.048061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 
00:29:08.500 [2024-11-29 13:13:08.048173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.048189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.048348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.048363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.048442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.048458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.048544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.048558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.048643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.048658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 
00:29:08.500 [2024-11-29 13:13:08.048798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.048813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.049051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.049067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.049146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.500 [2024-11-29 13:13:08.049161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.500 qpair failed and we were unable to recover it. 00:29:08.500 [2024-11-29 13:13:08.049267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.049282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.049456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.049471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 
00:29:08.501 [2024-11-29 13:13:08.049686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.049701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.049848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.049866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.050020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.050054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.050167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.050199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.050387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.050420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 
00:29:08.501 [2024-11-29 13:13:08.050547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.050562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.050717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.050731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.050825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.050859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.051137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.051173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.051449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.051482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 
00:29:08.501 [2024-11-29 13:13:08.051614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.051647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.051766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.051781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.051893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.051909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.052071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.052105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.052359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.052397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 
00:29:08.501 [2024-11-29 13:13:08.052688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.052721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.052823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.052854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.053037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.053072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.053210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.053242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.053458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.053491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 
00:29:08.501 [2024-11-29 13:13:08.053633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.053666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.053774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.053788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.053959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.053975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.054134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.054148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.054243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.054259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 
00:29:08.501 [2024-11-29 13:13:08.054343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.054358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.054530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.054568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.054749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.054781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.055060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.055095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 00:29:08.501 [2024-11-29 13:13:08.055213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.501 [2024-11-29 13:13:08.055248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.501 qpair failed and we were unable to recover it. 
00:29:08.502 [2024-11-29 13:13:08.062973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e08b20 is same with the state(6) to be set
00:29:08.502 [2024-11-29 13:13:08.063145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.503 [2024-11-29 13:13:08.063180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.503 qpair failed and we were unable to recover it.
00:29:08.504 [2024-11-29 13:13:08.077249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.504 [2024-11-29 13:13:08.077263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.504 qpair failed and we were unable to recover it. 00:29:08.504 [2024-11-29 13:13:08.077413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.504 [2024-11-29 13:13:08.077427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.504 qpair failed and we were unable to recover it. 00:29:08.504 [2024-11-29 13:13:08.077729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.504 [2024-11-29 13:13:08.077760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.504 qpair failed and we were unable to recover it. 00:29:08.504 [2024-11-29 13:13:08.077939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.504 [2024-11-29 13:13:08.077958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.504 qpair failed and we were unable to recover it. 00:29:08.504 [2024-11-29 13:13:08.078162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.504 [2024-11-29 13:13:08.078195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.504 qpair failed and we were unable to recover it. 
00:29:08.504 [2024-11-29 13:13:08.078481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.504 [2024-11-29 13:13:08.078514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.504 qpair failed and we were unable to recover it. 00:29:08.504 [2024-11-29 13:13:08.078718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.504 [2024-11-29 13:13:08.078732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.504 qpair failed and we were unable to recover it. 00:29:08.504 [2024-11-29 13:13:08.078911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.078972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.079244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.079278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.079456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.079488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 
00:29:08.505 [2024-11-29 13:13:08.079757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.079773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.079919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.079960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.080084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.080117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.080318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.080350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.080616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.080647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 
00:29:08.505 [2024-11-29 13:13:08.080854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.080897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.081083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.081098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.081189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.081203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.081446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.081461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.081645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.081660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 
00:29:08.505 [2024-11-29 13:13:08.081850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.081865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.082082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.082115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.082332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.082365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.082512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.082545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.082682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.082725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 
00:29:08.505 [2024-11-29 13:13:08.082885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.082899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.083049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.083064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.083222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.083236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.083383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.083398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.083484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.083499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 
00:29:08.505 [2024-11-29 13:13:08.083659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.083673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.083827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.083858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.084152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.084185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.084394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.084426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.084689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.084723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 
00:29:08.505 [2024-11-29 13:13:08.085034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.085077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.085351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.085386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.085628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.085673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.085848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.085864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.086037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.086054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 
00:29:08.505 [2024-11-29 13:13:08.086183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.086214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.086415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.086451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.086659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.086692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.086823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.086838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 00:29:08.505 [2024-11-29 13:13:08.087029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.505 [2024-11-29 13:13:08.087045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.505 qpair failed and we were unable to recover it. 
00:29:08.506 [2024-11-29 13:13:08.087283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.087299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.087471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.087485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.087711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.087728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.087883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.087898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.088081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.088119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 
00:29:08.506 [2024-11-29 13:13:08.088396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.088427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.088631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.088662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.088904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.088918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.089022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.089038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.089257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.089287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 
00:29:08.506 [2024-11-29 13:13:08.089535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.089567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.089700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.089740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.090016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.090031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.090244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.090258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.090469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.090484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 
00:29:08.506 [2024-11-29 13:13:08.090633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.090648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.090897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.090936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.091152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.091186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.091449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.091480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.091740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.091772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 
00:29:08.506 [2024-11-29 13:13:08.091965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.091981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.092219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.092251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.092501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.092532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.092781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.092814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.093063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.093095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 
00:29:08.506 [2024-11-29 13:13:08.093322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.093354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.093533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.093565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.093815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.093830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.093984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.094017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 00:29:08.506 [2024-11-29 13:13:08.094288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.506 [2024-11-29 13:13:08.094321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.506 qpair failed and we were unable to recover it. 
00:29:08.506 [2024-11-29 13:13:08.094458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.506 [2024-11-29 13:13:08.094491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.506 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) and unrecoverable qpair errors on tqpair=0x1dfabe0, addr=10.0.0.2, port=4420, repeated through 13:13:08.107, omitted ...]
00:29:08.508 [2024-11-29 13:13:08.107551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.508 [2024-11-29 13:13:08.107623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.508 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) and unrecoverable qpair errors on tqpair=0x7f838c000b90, addr=10.0.0.2, port=4420, repeated through 13:13:08.121, omitted ...]
00:29:08.509 [2024-11-29 13:13:08.121327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.509 [2024-11-29 13:13:08.121360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.509 qpair failed and we were unable to recover it. 00:29:08.509 [2024-11-29 13:13:08.121557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.509 [2024-11-29 13:13:08.121590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.509 qpair failed and we were unable to recover it. 00:29:08.509 [2024-11-29 13:13:08.121788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.509 [2024-11-29 13:13:08.121803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.509 qpair failed and we were unable to recover it. 00:29:08.509 [2024-11-29 13:13:08.121975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.509 [2024-11-29 13:13:08.121991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.509 qpair failed and we were unable to recover it. 00:29:08.509 [2024-11-29 13:13:08.122181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.122221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 
00:29:08.510 [2024-11-29 13:13:08.122411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.122443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.122709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.122742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.122922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.122937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.123157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.123191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.123465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.123497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 
00:29:08.510 [2024-11-29 13:13:08.123685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.123701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.123846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.123861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.124093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.124108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.124194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.124209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.124449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.124465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 
00:29:08.510 [2024-11-29 13:13:08.124692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.124707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.124813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.124827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.124975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.124991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.125227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.125241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.125467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.125482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 
00:29:08.510 [2024-11-29 13:13:08.125631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.125645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.125817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.125849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.126062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.126097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.126369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.126401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.126675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.126708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 
00:29:08.510 [2024-11-29 13:13:08.126998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.127037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.127223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.127238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.127345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.127359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.127542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.127556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.127647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.127663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 
00:29:08.510 [2024-11-29 13:13:08.127820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.127851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.127987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.128021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.128308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.128347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.128532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.128565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.128673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.128688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 
00:29:08.510 [2024-11-29 13:13:08.128943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.129002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.129278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.129311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.129508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.129541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.129810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.129842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.130037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.130052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 
00:29:08.510 [2024-11-29 13:13:08.130172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.130189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.130428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.130442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.510 [2024-11-29 13:13:08.130694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.510 [2024-11-29 13:13:08.130726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.510 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.130964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.130998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.131222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.131255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 
00:29:08.511 [2024-11-29 13:13:08.131436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.131468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.131678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.131711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.131982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.132017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.132271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.132302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.132509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.132543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 
00:29:08.511 [2024-11-29 13:13:08.132828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.132842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.133058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.133075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.133160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.133176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.133339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.133354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.133562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.133577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 
00:29:08.511 [2024-11-29 13:13:08.133818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.133835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.133926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.133942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.134092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.134107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.134183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.134198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.134432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.134507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 
00:29:08.511 [2024-11-29 13:13:08.134730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.134772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.134899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.134932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.135227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.135244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.135385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.135400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.135640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.135658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 
00:29:08.511 [2024-11-29 13:13:08.135907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.135942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.136222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.136262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.136399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.136433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.136555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.136591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.136846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.136888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 
00:29:08.511 [2024-11-29 13:13:08.137051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.137067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.137258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.137294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.137545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.137581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.137883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.137901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 00:29:08.511 [2024-11-29 13:13:08.138063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.511 [2024-11-29 13:13:08.138080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.511 qpair failed and we were unable to recover it. 
00:29:08.511 [2024-11-29 13:13:08.138267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.511 [2024-11-29 13:13:08.138303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.511 qpair failed and we were unable to recover it.
00:29:08.511 [2024-11-29 13:13:08.138506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.511 [2024-11-29 13:13:08.138539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.511 qpair failed and we were unable to recover it.
00:29:08.511 [2024-11-29 13:13:08.138750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.511 [2024-11-29 13:13:08.138766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.511 qpair failed and we were unable to recover it.
00:29:08.511 [2024-11-29 13:13:08.138849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.511 [2024-11-29 13:13:08.138864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.511 qpair failed and we were unable to recover it.
00:29:08.511 [2024-11-29 13:13:08.139043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.511 [2024-11-29 13:13:08.139061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.511 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.139172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.139203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.139399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.139431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.139573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.139607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.139736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.139751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.139837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.139851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.139998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.140015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.140227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.140251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.140417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.140434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.140558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.140600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.140814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.140849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.141102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.141140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.141353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.141387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.141507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.141541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.141748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.141784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.141913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.141959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.142224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.142240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.142378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.142393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.142502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.142541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.142722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.142755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.142957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.142995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.143193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.143229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.143350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.143385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.143656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.143690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.143837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.143871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.144028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.144070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.144332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.144368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.144622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.144657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.144795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.144834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.145095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.145115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.145336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.145373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.145573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.145610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.145763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.145779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.145936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.145958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.146137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.146180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.146331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.512 [2024-11-29 13:13:08.146372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.512 qpair failed and we were unable to recover it.
00:29:08.512 [2024-11-29 13:13:08.146564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.146601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.146859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.146897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.147098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.147116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.147389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.147429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.147691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.147728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.148002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.148019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.148183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.148201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.148423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.148441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.148592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.148608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.148826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.148865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.149075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.149115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.149345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.149381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.149515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.149548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.149705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.149740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.149930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.149955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.150106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.150140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.150358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.150392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.150589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.150622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.150766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.150782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.150940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.150963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.151121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.151138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.151323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.151355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.151490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.151523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.151749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.151781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.151920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.151934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.152139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.152173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.152314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.152348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.152535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.152570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.152713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.152759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.152851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.152866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.152973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.152989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.153217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.153233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.153378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.153393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.153587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.153621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.153821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.153859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.154083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.154119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.154322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.154355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.154543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.513 [2024-11-29 13:13:08.154584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.513 qpair failed and we were unable to recover it.
00:29:08.513 [2024-11-29 13:13:08.154791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.154825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.155032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.155069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.155313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.155329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.155566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.155586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.155733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.155748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.155925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.155970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.156156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.156193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.156416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.156455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.156574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.156621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.156808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.156823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.156916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.156932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.157101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.157117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.157264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.157282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.157437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.157453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.157617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.157633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.157791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.157813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.158005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.158021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.158190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.158207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.158430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.158447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.158669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.158685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.158861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.158878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.159044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.159083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.159266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.159302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.159453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.159507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.159707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.159740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.159966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.160000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.160267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.160282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.160437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.160454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.160701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.160743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.160897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.160935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.161258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.161294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.161422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.161455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.161665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.161700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.161905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.161941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.162222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.162240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.162327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.162342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.162486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.514 [2024-11-29 13:13:08.162502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.514 qpair failed and we were unable to recover it.
00:29:08.514 [2024-11-29 13:13:08.162685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.514 [2024-11-29 13:13:08.162703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.514 qpair failed and we were unable to recover it. 00:29:08.514 [2024-11-29 13:13:08.162961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.514 [2024-11-29 13:13:08.163000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.514 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.163284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.163318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.163552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.163588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.163785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.163802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 
00:29:08.515 [2024-11-29 13:13:08.163996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.164030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.164167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.164199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.164389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.164426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.164620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.164652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.164921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.164982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 
00:29:08.515 [2024-11-29 13:13:08.165169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.165186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.165325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.165340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.165498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.165518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.165680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.165696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.165913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.165932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 
00:29:08.515 [2024-11-29 13:13:08.166027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.166043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.166285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.166303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.166390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.166405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.166507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.166528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.166677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.166693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 
00:29:08.515 [2024-11-29 13:13:08.166852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.166885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.167024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.167057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.167242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.167275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.167411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.167443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.167571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.167602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 
00:29:08.515 [2024-11-29 13:13:08.167787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.167820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.167968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.168005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.168181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.168197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.168376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.168392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.168647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.168684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 
00:29:08.515 [2024-11-29 13:13:08.168824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.168855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.169063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.169097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.169281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.169296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.169377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.169392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.169551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.169567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 
00:29:08.515 [2024-11-29 13:13:08.169743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.169778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.169920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.169962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.170144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.170177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.170304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.170341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.515 [2024-11-29 13:13:08.170560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.170595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 
00:29:08.515 [2024-11-29 13:13:08.170798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.515 [2024-11-29 13:13:08.170814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.515 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.171016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.171089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.171306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.171343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.171682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.171727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.171901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.171917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 
00:29:08.516 [2024-11-29 13:13:08.172149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.172170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.172278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.172293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.172436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.172450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.172529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.172545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.172703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.172719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 
00:29:08.516 [2024-11-29 13:13:08.172886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.172900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.173076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.173110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.173381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.173413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.173670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.173701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.173846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.173878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 
00:29:08.516 [2024-11-29 13:13:08.174001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.174035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.174315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.174330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.174433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.174447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.174520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.174535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.174601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.174615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 
00:29:08.516 [2024-11-29 13:13:08.174755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.174769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.174953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.174969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.175153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.175168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.175399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.175430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.175555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.175586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 
00:29:08.516 [2024-11-29 13:13:08.175712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.175744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.175923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.175976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.176109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.176141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.176277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.176308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.176494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.176526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 
00:29:08.516 [2024-11-29 13:13:08.176796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.176811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.176925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.176968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.177314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.177384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.177667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.177704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.516 qpair failed and we were unable to recover it. 00:29:08.516 [2024-11-29 13:13:08.177962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.516 [2024-11-29 13:13:08.177973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.517 qpair failed and we were unable to recover it. 
00:29:08.517 [2024-11-29 13:13:08.178124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.517 [2024-11-29 13:13:08.178134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.517 qpair failed and we were unable to recover it.
00:29:08.519 [2024-11-29 13:13:08.209570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.519 [2024-11-29 13:13:08.209602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.519 qpair failed and we were unable to recover it. 00:29:08.519 [2024-11-29 13:13:08.209869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.519 [2024-11-29 13:13:08.209901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.519 qpair failed and we were unable to recover it. 00:29:08.519 [2024-11-29 13:13:08.210124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.519 [2024-11-29 13:13:08.210158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.519 qpair failed and we were unable to recover it. 00:29:08.519 [2024-11-29 13:13:08.210403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.519 [2024-11-29 13:13:08.210414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.519 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.210582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.210592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 
00:29:08.520 [2024-11-29 13:13:08.210825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.210857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.211149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.211183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.211392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.211424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.211656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.211688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.211965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.211999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 
00:29:08.520 [2024-11-29 13:13:08.212239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.212250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.212482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.212514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.212774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.212807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.212994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.213005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.213212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.213243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 
00:29:08.520 [2024-11-29 13:13:08.213459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.213497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.213677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.213709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.213974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.214008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.214279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.214314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.214603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.214634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 
00:29:08.520 [2024-11-29 13:13:08.214909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.214942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.215159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.215191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.215384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.215394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.215559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.215591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.215892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.215924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 
00:29:08.520 [2024-11-29 13:13:08.216181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.216213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.216513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.216545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.216747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.216779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.217052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.217087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.217273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.217283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 
00:29:08.520 [2024-11-29 13:13:08.217551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.217583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.217787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.217818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.217998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.218009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.218175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.218185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.218359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.218369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 
00:29:08.520 [2024-11-29 13:13:08.218598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.218629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.218925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.218967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.219165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.219175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.219422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.219432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 00:29:08.520 [2024-11-29 13:13:08.219621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.520 [2024-11-29 13:13:08.219654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.520 qpair failed and we were unable to recover it. 
00:29:08.520 [2024-11-29 13:13:08.219927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.219968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.220259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.220291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.220559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.220597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.220852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.220884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.221152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.221163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 
00:29:08.521 [2024-11-29 13:13:08.221389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.221400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.221602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.221612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.221813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.221824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.221978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.221990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.222235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.222268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 
00:29:08.521 [2024-11-29 13:13:08.222565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.222597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.222789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.222822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.223009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.223021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.223251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.223283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.223428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.223461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 
00:29:08.521 [2024-11-29 13:13:08.223750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.223781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.224054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.224066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.224316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.224327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.224552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.224562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.224798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.224808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 
00:29:08.521 [2024-11-29 13:13:08.224968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.224979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.225191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.225223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.225490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.225522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.225820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.225852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.226070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.226104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 
00:29:08.521 [2024-11-29 13:13:08.226218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.226250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.226435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.226467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.226736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.226768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.226940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.226956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.227141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.227174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 
00:29:08.521 [2024-11-29 13:13:08.227418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.227450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.227712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.227743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.227941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.227984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.228167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.228199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.228358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.228368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 
00:29:08.521 [2024-11-29 13:13:08.228547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.228579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.228782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.228814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.228997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.229030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.229218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.229250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 00:29:08.521 [2024-11-29 13:13:08.229560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.521 [2024-11-29 13:13:08.229592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.521 qpair failed and we were unable to recover it. 
00:29:08.521 .. 00:29:08.524 [2024-11-29 13:13:08.229851 .. 13:13:08.258256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair with addr=10.0.0.2, port=4420: the same error pair repeated ~110 more times, first for tqpair=0x7f8384000b90 and then for tqpair=0x7f8380000b90; every attempt ended with: qpair failed and we were unable to recover it.
00:29:08.524 [2024-11-29 13:13:08.258419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.258434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.258636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.258668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.258963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.258997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.259201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.259233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.259413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.259445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 
00:29:08.524 [2024-11-29 13:13:08.259676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.259708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.259978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.260011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.260223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.260254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.260511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.260526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.260764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.260779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 
00:29:08.524 [2024-11-29 13:13:08.261020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.261036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.261281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.261296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.261399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.261416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.261645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.261677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.261825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.261857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 
00:29:08.524 [2024-11-29 13:13:08.262119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.262166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.262381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.262395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.524 [2024-11-29 13:13:08.262662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.524 [2024-11-29 13:13:08.262677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.524 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.262920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.262935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.263175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.263190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 
00:29:08.525 [2024-11-29 13:13:08.263461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.263476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.263641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.263656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.263848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.263880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.264034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.264067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.264278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.264310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 
00:29:08.525 [2024-11-29 13:13:08.264598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.264629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.264968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.265003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.265281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.265313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.265588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.265621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.265826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.265858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 
00:29:08.525 [2024-11-29 13:13:08.266126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.266142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.266361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.266375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.266543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.266559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.266753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.266768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.266888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.266903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 
00:29:08.525 [2024-11-29 13:13:08.267132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.267148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.267333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.267348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.267575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.267590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.267754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.267770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.267931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.267973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 
00:29:08.525 [2024-11-29 13:13:08.268227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.268259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.268520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.268552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.268858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.268890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.269162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.269177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.269397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.269413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 
00:29:08.525 [2024-11-29 13:13:08.269627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.269642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.269888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.269903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.270066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.270083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.270242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.270273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.270400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.270432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 
00:29:08.525 [2024-11-29 13:13:08.270709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.270741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.270932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.270952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.271195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.271233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.271517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.271550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.271731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.271763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 
00:29:08.525 [2024-11-29 13:13:08.272040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.272073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.272366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.272398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.272620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.272652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.272905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.272937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 00:29:08.525 [2024-11-29 13:13:08.273251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.273284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.525 qpair failed and we were unable to recover it. 
00:29:08.525 [2024-11-29 13:13:08.273535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.525 [2024-11-29 13:13:08.273568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.273847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.273879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.274157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.274172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.274322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.274336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.274584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.274599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 
00:29:08.526 [2024-11-29 13:13:08.274868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.274900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.275114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.275148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.275411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.275444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.275649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.275681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.275939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.275992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 
00:29:08.526 [2024-11-29 13:13:08.276180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.276212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.276489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.276521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.276799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.276831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.277128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.277172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.277393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.277408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 
00:29:08.526 [2024-11-29 13:13:08.277583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.277597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.277770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.277802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.278089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.278123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.278406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.278437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.278694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.278726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 
00:29:08.526 [2024-11-29 13:13:08.278912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.278928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.279180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.279213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.279434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.279465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.279612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.279643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.279941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.279983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 
00:29:08.526 [2024-11-29 13:13:08.280265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.280297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.280529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.280561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.280759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.280791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.281046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.281080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.281395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.281427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 
00:29:08.526 [2024-11-29 13:13:08.281737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.281769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.282030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.282064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.282340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.282379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.282594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.282626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.282889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.282921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 
00:29:08.526 [2024-11-29 13:13:08.283144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.283178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.283443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.283458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.283697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.283712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.283874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.283889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.284050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.284066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 
00:29:08.526 [2024-11-29 13:13:08.284141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.284155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.284245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.284260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.284496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.284511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.284770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.284807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.285088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.285123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 
00:29:08.526 [2024-11-29 13:13:08.285406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.285420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.285675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.526 [2024-11-29 13:13:08.285690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.526 qpair failed and we were unable to recover it. 00:29:08.526 [2024-11-29 13:13:08.285860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.527 [2024-11-29 13:13:08.285874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.527 qpair failed and we were unable to recover it. 00:29:08.527 [2024-11-29 13:13:08.286041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.527 [2024-11-29 13:13:08.286057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.527 qpair failed and we were unable to recover it. 00:29:08.527 [2024-11-29 13:13:08.286302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.527 [2024-11-29 13:13:08.286316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.527 qpair failed and we were unable to recover it. 
00:29:08.527 [2024-11-29 13:13:08.286595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.527 [2024-11-29 13:13:08.286610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.527 qpair failed and we were unable to recover it. 00:29:08.527 [2024-11-29 13:13:08.286910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.527 [2024-11-29 13:13:08.286942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.527 qpair failed and we were unable to recover it. 00:29:08.527 [2024-11-29 13:13:08.287180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.527 [2024-11-29 13:13:08.287213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.527 qpair failed and we were unable to recover it. 00:29:08.527 [2024-11-29 13:13:08.287400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.527 [2024-11-29 13:13:08.287433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.527 qpair failed and we were unable to recover it. 00:29:08.527 [2024-11-29 13:13:08.287628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.527 [2024-11-29 13:13:08.287660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.527 qpair failed and we were unable to recover it. 
00:29:08.527 [2024-11-29 13:13:08.287853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.527 [2024-11-29 13:13:08.287884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.527 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.288178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.288213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.288503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.288518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.288695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.288710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.288959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.288975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 
00:29:08.808 [2024-11-29 13:13:08.289131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.289146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.289380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.289395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.289635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.289650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.289876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.289891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.290074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.290089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 
00:29:08.808 [2024-11-29 13:13:08.290256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.290271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.290510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.290525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.290736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.290751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.290933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.290952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.291148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.291181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 
00:29:08.808 [2024-11-29 13:13:08.291445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.291477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.291760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.291792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.292082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.292121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.292394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.292409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.292648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.292663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 
00:29:08.808 [2024-11-29 13:13:08.292902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.292917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.293166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.293182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.293418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.293433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.293595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.808 [2024-11-29 13:13:08.293609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.808 qpair failed and we were unable to recover it. 00:29:08.808 [2024-11-29 13:13:08.293796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.293810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
00:29:08.809 [2024-11-29 13:13:08.294041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.294075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.294372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.294404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.294618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.294649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.294905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.294938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.295187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.295202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
00:29:08.809 [2024-11-29 13:13:08.295440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.295455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.295691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.295707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.295890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.295905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.296072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.296087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.296329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.296361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
00:29:08.809 [2024-11-29 13:13:08.296564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.296596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.296860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.296892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.297165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.297199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.297392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.297423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.297637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.297669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
00:29:08.809 [2024-11-29 13:13:08.297870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.297902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.298184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.298200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.298419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.298451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.298654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.298686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.298971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.299005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
00:29:08.809 [2024-11-29 13:13:08.299191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.299206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.299471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.299485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.299577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.299591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.299812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.299826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.300068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.300102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
00:29:08.809 [2024-11-29 13:13:08.300302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.300334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.300590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.300622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.300918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.300974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.301235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.301271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.301496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.301511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
00:29:08.809 [2024-11-29 13:13:08.301770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.301786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.301957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.301972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.302145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.302183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.302383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.302415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 00:29:08.809 [2024-11-29 13:13:08.302692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.809 [2024-11-29 13:13:08.302724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.809 qpair failed and we were unable to recover it. 
00:29:08.812 [2024-11-29 13:13:08.328710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.328742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.329049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.329083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.329364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.329381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.329544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.329559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.329769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.329845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.330197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.330241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.330449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.330465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.330636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.330652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.330820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.330852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.331119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.331156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.331382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.812 [2024-11-29 13:13:08.331396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.812 qpair failed and we were unable to recover it.
00:29:08.812 [2024-11-29 13:13:08.331623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.331657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.331914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.331965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.332092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.332107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.332371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.332403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.332599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.332631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.332836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.332869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.333154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.333188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.333456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.333489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.333687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.333701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.333989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.334023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.334220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.334252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.334529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.334544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.334752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.334768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.334938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.334961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.335126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.335142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.335332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.335347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.335585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.335620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.335826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.335858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.336141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.336174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.336431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.336447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.336664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.336685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.336939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.336960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.337059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.337079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.337163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.337179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.337344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.337359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.337466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.337482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.337649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.337665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.337945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.338004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.338225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.338258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.338463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.338498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.338833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.338879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.339081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.339117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.339249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.339283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.339492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.339507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.339738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.339770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.340023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.340059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.340257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.340297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.340459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.340475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.813 [2024-11-29 13:13:08.340710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.813 [2024-11-29 13:13:08.340726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.813 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.341023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.341058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.341262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.341278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.341469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.341485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.341721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.341739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.341915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.341932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.342230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.342245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.342407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.342423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.342614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.342646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.342845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.342885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.343102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.343136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.343339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.343372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.343589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.343626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.343817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.343832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.344070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.344096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.344255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.344270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.344492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.344526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.344833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.344865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.345077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.345112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.345316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.345349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.345608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.345623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.345847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.345863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.346084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.346102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.346282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.346299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.346520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.346552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.346853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.346885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.347185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.347202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.347328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.347343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.347618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.347662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.347870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.347903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.348189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.348235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.348528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.348545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.348707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.348724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.348837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.348852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.349087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.349106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.349330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.349345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.349463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.349504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.349796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.349827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.350114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.350150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.814 qpair failed and we were unable to recover it.
00:29:08.814 [2024-11-29 13:13:08.350341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.814 [2024-11-29 13:13:08.350359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.815 qpair failed and we were unable to recover it.
00:29:08.815 [2024-11-29 13:13:08.350528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.815 [2024-11-29 13:13:08.350562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.815 qpair failed and we were unable to recover it.
00:29:08.815 [2024-11-29 13:13:08.350681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.815 [2024-11-29 13:13:08.350713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.815 qpair failed and we were unable to recover it.
00:29:08.815 [2024-11-29 13:13:08.350974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.815 [2024-11-29 13:13:08.351007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.815 qpair failed and we were unable to recover it.
00:29:08.815 [2024-11-29 13:13:08.351195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.815 [2024-11-29 13:13:08.351228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.815 qpair failed and we were unable to recover it.
00:29:08.815 [2024-11-29 13:13:08.351484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.815 [2024-11-29 13:13:08.351517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.815 qpair failed and we were unable to recover it.
00:29:08.815 [2024-11-29 13:13:08.351784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.815 [2024-11-29 13:13:08.351798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.815 qpair failed and we were unable to recover it.
00:29:08.815 [2024-11-29 13:13:08.351924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.351939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.352064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.352079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.352176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.352190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.352423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.352457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.352704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.352780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 
00:29:08.815 [2024-11-29 13:13:08.352934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.352992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.353198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.353215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.353409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.353444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.353664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.353697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.353831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.353865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 
00:29:08.815 [2024-11-29 13:13:08.354154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.354188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.354317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.354351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.354489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.354526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.354616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.354631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.354852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.354883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 
00:29:08.815 [2024-11-29 13:13:08.355183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.355218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.355349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.355383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.355561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.355581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.355815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.355831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.356001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.356034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 
00:29:08.815 [2024-11-29 13:13:08.356175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.356208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.356488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.356521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.356705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.356719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.356937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.356982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.357238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.357272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 
00:29:08.815 [2024-11-29 13:13:08.357485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.357500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.357725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.357758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.357971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.358006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.358293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.358326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.358525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.358560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 
00:29:08.815 [2024-11-29 13:13:08.358817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.358851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.359119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.815 [2024-11-29 13:13:08.359153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.815 qpair failed and we were unable to recover it. 00:29:08.815 [2024-11-29 13:13:08.359453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.359485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.359798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.359833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.360102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.360148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 
00:29:08.816 [2024-11-29 13:13:08.360253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.360268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.360383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.360398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.360519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.360534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.360636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.360651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.360759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.360777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 
00:29:08.816 [2024-11-29 13:13:08.361009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.361044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.361185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.361219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.361445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.361479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.361683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.361698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.361894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.361909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 
00:29:08.816 [2024-11-29 13:13:08.362084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.362100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.362242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.362257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.362452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.362468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.362649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.362664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.362905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.362920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 
00:29:08.816 [2024-11-29 13:13:08.363110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.363126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.363362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.363377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.363604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.363637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.363777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.363811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.364041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.364075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 
00:29:08.816 [2024-11-29 13:13:08.364328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.364345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.364507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.364524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.364816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.364861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.365085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.365119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.365282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.365298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 
00:29:08.816 [2024-11-29 13:13:08.365450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.365465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.365616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.365631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.365878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.365911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.366134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.366169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.366362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.366378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 
00:29:08.816 [2024-11-29 13:13:08.366585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.366600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.366784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.366818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.367163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.367199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.367472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.367487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 00:29:08.816 [2024-11-29 13:13:08.367826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.367859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.816 qpair failed and we were unable to recover it. 
00:29:08.816 [2024-11-29 13:13:08.368077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.816 [2024-11-29 13:13:08.368112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.368257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.368291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.368492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.368508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.368747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.368763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.368920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.368934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 
00:29:08.817 [2024-11-29 13:13:08.369043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.369059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.369256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.369271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.369435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.369450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.369650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.369685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.369967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.370000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 
00:29:08.817 [2024-11-29 13:13:08.370275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.370291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.370490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.370523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.370770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.370803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.371087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.371121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.371264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.371298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 
00:29:08.817 [2024-11-29 13:13:08.371504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.371520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.371610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.371625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.371855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.371871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.372079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.372095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.372199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.372214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 
00:29:08.817 [2024-11-29 13:13:08.372352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.372369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.372540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.372555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.372765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.372798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.373016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.373050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.373255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.373271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 
00:29:08.817 [2024-11-29 13:13:08.373445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.373479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.373684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.373717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.373969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.374011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.374161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.374194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.374488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.374520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 
00:29:08.817 [2024-11-29 13:13:08.374756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.374771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.374919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.374935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.375194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.375211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.817 [2024-11-29 13:13:08.375306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.817 [2024-11-29 13:13:08.375321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.817 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.375432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.375447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-11-29 13:13:08.375608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.375623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.375720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.375735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.375920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.375997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.376209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.376242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.376437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.376471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-11-29 13:13:08.376785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.376801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.376970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.376988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.377084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.377130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.377384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.377417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.377697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.377731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-11-29 13:13:08.378024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.378058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.378267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.378283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.378555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.378590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.378794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.378827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.379087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.379121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-11-29 13:13:08.379271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.379305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.379506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.379551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.379701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.379716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.379960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.379996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.380157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.380173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-11-29 13:13:08.380280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.380298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.380406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.380423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.380541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.380557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.380743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.380776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.380981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.381017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-11-29 13:13:08.381275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.381322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.381428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.381443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.381649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.381684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.381881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.381915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.382091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.382125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-11-29 13:13:08.382417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.382434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.382545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.382560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.382782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.382820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.383108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.383144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.383393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.383409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 
00:29:08.818 [2024-11-29 13:13:08.383695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.383712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.383880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.818 [2024-11-29 13:13:08.383897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.818 qpair failed and we were unable to recover it. 00:29:08.818 [2024-11-29 13:13:08.384117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.384133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.384303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.384336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.385653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.385687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-11-29 13:13:08.385857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.385873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.386122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.386139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.386335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.386352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.386518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.386551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.386771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.386804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-11-29 13:13:08.386984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.387018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.387270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.387303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.387505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.387538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.387816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.387832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.388008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.388024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-11-29 13:13:08.388171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.388186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.388371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.388404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.388772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.388806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.389060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.389095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.389255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.389288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-11-29 13:13:08.389498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.389531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.389717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.389733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.389916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.389932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.390103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.390119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.390233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.390249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-11-29 13:13:08.390353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.390368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.390581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.390615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.390870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.390904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.391182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.391217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.391413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.391448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-11-29 13:13:08.391713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.391729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.391981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.391999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.392133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.392149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.392329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.392345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.392464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.392480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-11-29 13:13:08.392679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.392710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.392989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.393023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.393180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.393213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.393421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.393437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.819 [2024-11-29 13:13:08.393631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.393647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 
00:29:08.819 [2024-11-29 13:13:08.393793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.819 [2024-11-29 13:13:08.393808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.819 qpair failed and we were unable to recover it. 00:29:08.820 [2024-11-29 13:13:08.394029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.820 [2024-11-29 13:13:08.394045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.820 qpair failed and we were unable to recover it. 00:29:08.820 [2024-11-29 13:13:08.394165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.820 [2024-11-29 13:13:08.394181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.820 qpair failed and we were unable to recover it. 00:29:08.820 [2024-11-29 13:13:08.394398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.820 [2024-11-29 13:13:08.394433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.820 qpair failed and we were unable to recover it. 00:29:08.820 [2024-11-29 13:13:08.394594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.820 [2024-11-29 13:13:08.394626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.820 qpair failed and we were unable to recover it. 
00:29:08.820 [2024-11-29 13:13:08.394912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.820 [2024-11-29 13:13:08.394946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.820 qpair failed and we were unable to recover it. 00:29:08.820 [2024-11-29 13:13:08.395106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.820 [2024-11-29 13:13:08.395140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.820 qpair failed and we were unable to recover it. 00:29:08.820 [2024-11-29 13:13:08.395397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.820 [2024-11-29 13:13:08.395429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.820 qpair failed and we were unable to recover it. 00:29:08.820 [2024-11-29 13:13:08.395649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.820 [2024-11-29 13:13:08.395683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.820 qpair failed and we were unable to recover it. 00:29:08.820 [2024-11-29 13:13:08.395973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.820 [2024-11-29 13:13:08.396008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.820 qpair failed and we were unable to recover it. 
00:29:08.820 [2024-11-29 13:13:08.396213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.396245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.396394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.396427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.396651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.396668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.396816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.396855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.397061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.397097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.397292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.397326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.397477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.397494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.397702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.397734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.397992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.398029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.398181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.398216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.398402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.398417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.398643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.398677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.398958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.398993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.399213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.399245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.399399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.399438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.399685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.399700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.399865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.399880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.399990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.400009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.400240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.400255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.400364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.400380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.400468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.400483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.400751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.400766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.400843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.400857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.401079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.401157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.401489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.401564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.401698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.401717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.401892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.401925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.402240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.402275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.402536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.820 [2024-11-29 13:13:08.402569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.820 qpair failed and we were unable to recover it.
00:29:08.820 [2024-11-29 13:13:08.402793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.402809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.403074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.403089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.403211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.403226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.403421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.403437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.403621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.403652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.403859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.403893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.404118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.404152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.404312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.404327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.404548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.404564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.404722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.404754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.405010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.405043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.405263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.405295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.405453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.405486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.405692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.405724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.405942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.405987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.406220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.406237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.406476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.406512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.406724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.406758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.406958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.406993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.407149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.407182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.407480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.407511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.407722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.407738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.407909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.407924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.408119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.408135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.408287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.408322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.408596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.408635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.408850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.408885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.409092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.409125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.409355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.409388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.409552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.409567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.409804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.409820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.410097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.410131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.410276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.410292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.410483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.410515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.821 qpair failed and we were unable to recover it.
00:29:08.821 [2024-11-29 13:13:08.410746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.821 [2024-11-29 13:13:08.410779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.411084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.411118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.411287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.411321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.411578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.411611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.411872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.411887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.412097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.412114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.412208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.412223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.412346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.412361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.412460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.412477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.412643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.412681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.412966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.413001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.413288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.413322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.413452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.413485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.413697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.413731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.414039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.414074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.414299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.414332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.414547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.414580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.414848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.414865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.415050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.415069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.415174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.415189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.415387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.415403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.415562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.415577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.415748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.415764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.415974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.415991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.416116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.416132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.416256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.416273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.416436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.416456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.416648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.416664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.416830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.416847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.417028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.417044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.417159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.417176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.417293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.822 [2024-11-29 13:13:08.417313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.822 qpair failed and we were unable to recover it.
00:29:08.822 [2024-11-29 13:13:08.417594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.822 [2024-11-29 13:13:08.417611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.822 qpair failed and we were unable to recover it. 00:29:08.822 [2024-11-29 13:13:08.417776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.822 [2024-11-29 13:13:08.417792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.822 qpair failed and we were unable to recover it. 00:29:08.822 [2024-11-29 13:13:08.417878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.822 [2024-11-29 13:13:08.417895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.822 qpair failed and we were unable to recover it. 00:29:08.822 [2024-11-29 13:13:08.418091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.822 [2024-11-29 13:13:08.418108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.822 qpair failed and we were unable to recover it. 00:29:08.822 [2024-11-29 13:13:08.418227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.822 [2024-11-29 13:13:08.418242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.822 qpair failed and we were unable to recover it. 
00:29:08.822 [2024-11-29 13:13:08.418417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.418433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.418605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.418622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.418847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.418864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.419028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.419044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.419158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.419173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 
00:29:08.823 [2024-11-29 13:13:08.419350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.419365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.419479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.419494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.419671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.419687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.419850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.419867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.420063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.420080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 
00:29:08.823 [2024-11-29 13:13:08.420253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.420270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.420453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.420470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.420549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.420565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.420836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.420853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.421097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.421113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 
00:29:08.823 [2024-11-29 13:13:08.421282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.421298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.421406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.421422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.421520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.421538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.421724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.421739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.421932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.421961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 
00:29:08.823 [2024-11-29 13:13:08.422249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.422264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.422460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.422476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.422693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.422708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.422878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.422895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.423078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.423096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 
00:29:08.823 [2024-11-29 13:13:08.423331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.423349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.423544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.423561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.423758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.423775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.423971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.423987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.424106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.424124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 
00:29:08.823 [2024-11-29 13:13:08.424280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.424296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.424403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.424418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.424537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.424553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.424733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.424748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.424964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.424984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 
00:29:08.823 [2024-11-29 13:13:08.425179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.425212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.425475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.425510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.425669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.425713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.823 [2024-11-29 13:13:08.425939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.823 [2024-11-29 13:13:08.425961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.823 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.426084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.426100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 
00:29:08.824 [2024-11-29 13:13:08.426201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.426217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.426498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.426514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.426748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.426764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.426924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.426940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.427098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.427115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 
00:29:08.824 [2024-11-29 13:13:08.427341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.427357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.427530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.427545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.427788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.427805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.428110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.428129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.428297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.428313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 
00:29:08.824 [2024-11-29 13:13:08.428513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.428548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.428779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.428813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.429040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.429073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.429274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.429310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.429525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.429567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 
00:29:08.824 [2024-11-29 13:13:08.430590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.430626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.430888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.430905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.431057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.431097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.431290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.431324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.431525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.431558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 
00:29:08.824 [2024-11-29 13:13:08.431826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.431842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.431979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.432014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.432270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.432303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.432634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.432671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.432835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.432867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 
00:29:08.824 [2024-11-29 13:13:08.433084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.433118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.433367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.433383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.433583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.433617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.433753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.433786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.434004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.434040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 
00:29:08.824 [2024-11-29 13:13:08.434331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.434364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.434494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.434510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.434692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.434709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.434978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.435013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.435228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.435267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 
00:29:08.824 [2024-11-29 13:13:08.435538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.435573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.435866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.824 [2024-11-29 13:13:08.435898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.824 qpair failed and we were unable to recover it. 00:29:08.824 [2024-11-29 13:13:08.436102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.825 [2024-11-29 13:13:08.436137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.825 qpair failed and we were unable to recover it. 00:29:08.825 [2024-11-29 13:13:08.436307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.825 [2024-11-29 13:13:08.436340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.825 qpair failed and we were unable to recover it. 00:29:08.825 [2024-11-29 13:13:08.436506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.825 [2024-11-29 13:13:08.436543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.825 qpair failed and we were unable to recover it. 
00:29:08.825 [2024-11-29 13:13:08.436819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.825 [2024-11-29 13:13:08.436835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.825 qpair failed and we were unable to recover it. 00:29:08.825 [2024-11-29 13:13:08.436993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.825 [2024-11-29 13:13:08.437029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.825 qpair failed and we were unable to recover it. 00:29:08.825 [2024-11-29 13:13:08.437242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.825 [2024-11-29 13:13:08.437277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.825 qpair failed and we were unable to recover it. 00:29:08.825 [2024-11-29 13:13:08.437484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.825 [2024-11-29 13:13:08.437500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.825 qpair failed and we were unable to recover it. 00:29:08.825 [2024-11-29 13:13:08.437698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.825 [2024-11-29 13:13:08.437736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.825 qpair failed and we were unable to recover it. 
[... identical retry sequence elided: the three-record pattern "posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." repeats with successive timestamps from 13:13:08.437922 through 13:13:08.456117 ...]
00:29:08.828 [2024-11-29 13:13:08.456214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.456230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.456463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.456479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.456586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.456603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.456771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.456787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.456961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.456977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 
00:29:08.828 [2024-11-29 13:13:08.457077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.457094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.457191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.457206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.457338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.457354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.457452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.457468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.457626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.457641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 
00:29:08.828 [2024-11-29 13:13:08.457804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.457820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.458012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.458028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.458191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.458207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.458471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.458487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.458672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.458687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 
00:29:08.828 [2024-11-29 13:13:08.458776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.458791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.458961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.458978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.459224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.459239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.459330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.459346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.459510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.459527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 
00:29:08.828 [2024-11-29 13:13:08.459622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.459637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.459803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.459818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.459973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.459991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.460211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.460227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.460334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.460349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 
00:29:08.828 [2024-11-29 13:13:08.460434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.460450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.460625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.460659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.460833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.460850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.460945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.460968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 00:29:08.828 [2024-11-29 13:13:08.461058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.461073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.828 qpair failed and we were unable to recover it. 
00:29:08.828 [2024-11-29 13:13:08.461189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.828 [2024-11-29 13:13:08.461205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.461372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.461386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.461466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.461482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.461648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.461663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.461770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.461784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 
00:29:08.829 [2024-11-29 13:13:08.461863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.461878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.461974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.461990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.462074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.462089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.462251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.462265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.462351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.462370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 
00:29:08.829 [2024-11-29 13:13:08.462593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.462609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.462756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.462770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.462942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.462962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.463054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.463069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.463174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.463188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 
00:29:08.829 [2024-11-29 13:13:08.463375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.463391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.463477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.463492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.463649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.463663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.463772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.463787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.463877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.463892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 
00:29:08.829 [2024-11-29 13:13:08.464064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.464079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.464165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.464179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.464328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.464344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.464442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.464457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.464668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.464683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 
00:29:08.829 [2024-11-29 13:13:08.464833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.464848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.465001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.465016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.465162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.465177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.465271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.465285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.465386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.465402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 
00:29:08.829 [2024-11-29 13:13:08.465494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.465509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.465599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.465614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.465770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.465785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.465931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.465954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.466047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.466062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 
00:29:08.829 [2024-11-29 13:13:08.466228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.466243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.466379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.466419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.466628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.466645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.466738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.829 [2024-11-29 13:13:08.466753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.829 qpair failed and we were unable to recover it. 00:29:08.829 [2024-11-29 13:13:08.466853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.466870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 
00:29:08.830 [2024-11-29 13:13:08.467087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.467104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.467211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.467226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.467308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.467324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.467411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.467427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.467576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.467591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 
00:29:08.830 [2024-11-29 13:13:08.467843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.467859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.467967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.467985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.468136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.468150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.468309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.468324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.468405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.468420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 
00:29:08.830 [2024-11-29 13:13:08.468674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.468690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.468857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.468874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.469039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.469056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.469271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.469288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.469395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.469410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 
00:29:08.830 [2024-11-29 13:13:08.469502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.469517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.469629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.469644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.469799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.469813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.469901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.469916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.470084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.470103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 
00:29:08.830 [2024-11-29 13:13:08.470189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.470205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.470366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.470382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.470469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.470485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.470634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.470657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.470834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.470850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 
00:29:08.830 [2024-11-29 13:13:08.471004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.471019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.471106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.471122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.471271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.471287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.471374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.471389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.471466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.471485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 
00:29:08.830 [2024-11-29 13:13:08.471644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.471659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.471814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.471832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.472004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.472021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.472189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.472205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.472293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.472307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 
00:29:08.830 [2024-11-29 13:13:08.472384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.472400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.830 qpair failed and we were unable to recover it. 00:29:08.830 [2024-11-29 13:13:08.472490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.830 [2024-11-29 13:13:08.472505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.472719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.472734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.472823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.472838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.472934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.472953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 
00:29:08.831 [2024-11-29 13:13:08.473034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.473049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.473144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.473159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.473331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.473345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.473498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.473512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.473733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.473748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 
00:29:08.831 [2024-11-29 13:13:08.473895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.473910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.474093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.474108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.474294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.474311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.474476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.474493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.474649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.474664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 
00:29:08.831 [2024-11-29 13:13:08.474738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.474763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.474910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.474926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.475079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.475094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.475174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.475190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.475408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.475423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 
00:29:08.831 [2024-11-29 13:13:08.475515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.475529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.475688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.475702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.475784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.475800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.475911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.475927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.476103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.476118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 
00:29:08.831 [2024-11-29 13:13:08.476279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.476295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.476389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.476405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.476494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.476513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.476675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.476690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.476839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.476854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 
00:29:08.831 [2024-11-29 13:13:08.476959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.476974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.477065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.477080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.477183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.477198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.477411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.477426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.477512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.477527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 
00:29:08.831 [2024-11-29 13:13:08.477621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.477635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.831 [2024-11-29 13:13:08.477779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.831 [2024-11-29 13:13:08.477795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.831 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.477942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.477965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.478060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.478074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.478330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.478344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 
00:29:08.832 [2024-11-29 13:13:08.478515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.478531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.478692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.478707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.478932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.478960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.479064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.479079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.479266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.479281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 
00:29:08.832 [2024-11-29 13:13:08.479377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.479392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.479582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.479598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.479809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.479825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.479957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.479973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.480145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.480161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 
00:29:08.832 [2024-11-29 13:13:08.480272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.480288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.480365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.480379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.480550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.480565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.480653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.480668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.480757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.480772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 
00:29:08.832 [2024-11-29 13:13:08.480946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.480968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.481117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.481132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.481208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.481222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.481378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.481393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.481499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.481513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 
00:29:08.832 [2024-11-29 13:13:08.481658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.481673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.481770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.481787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.481932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.481946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.482055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.482070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.482294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.482309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 
00:29:08.832 [2024-11-29 13:13:08.482400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.482415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.482559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.482574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.482721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.482735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.482836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.482851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.483043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.483058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 
00:29:08.832 [2024-11-29 13:13:08.483199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.483216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.483313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.483328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.483496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.483510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.483621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.483637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 00:29:08.832 [2024-11-29 13:13:08.483798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.832 [2024-11-29 13:13:08.483812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:08.832 qpair failed and we were unable to recover it. 
00:29:08.832 [2024-11-29 13:13:08.483974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.832 [2024-11-29 13:13:08.483989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.832 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.484081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.484097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.484310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.484324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.484537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.484551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.484742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.484757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.484933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.484953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.485136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.485150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.485374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.485390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.485605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.485627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.485788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.485804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.485966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.485983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.486089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.486104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.486342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.486357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.486465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.486480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.486560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.486575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.486776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.486791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.486957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.486973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.487128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.487142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.487329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.487343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.487439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.487454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.487700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.487715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.487951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.487967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.488076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.488091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.488276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.488290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.488523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.488538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.488728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.488744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.488927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.488942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.489083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.489098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.489323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.489338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.489479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.489495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.489712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.489727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.489960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.489976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.490128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.490142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.490354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.490369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.490591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.490606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.490710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.490725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.490915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.490930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.491044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.491060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.491318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.833 [2024-11-29 13:13:08.491351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.833 qpair failed and we were unable to recover it.
00:29:08.833 [2024-11-29 13:13:08.491552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.491583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.491802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.491834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.492032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.492048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.492164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.492196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.492428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.492462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.492688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.492721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.492965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.492982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.493147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.493179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.493332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.493364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.493620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.493659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.493907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.493922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.494094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.494111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.494273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.494306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.494528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.494561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.494710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.494743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.494959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.494975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.495191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.495209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.495301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.495318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.495469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.495485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.495592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.495607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.495759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.495775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.495981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.495997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.496209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.496224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.496382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.496398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.496484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.496499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.496756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.496771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.497035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.497050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.497258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.497273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.497430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.497446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.497651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.497667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.497776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.497791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.498031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.498047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.498279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.498294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.498517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.498532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.498756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.498772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.498915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.498930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.499044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.499060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.499151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.834 [2024-11-29 13:13:08.499167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.834 qpair failed and we were unable to recover it.
00:29:08.834 [2024-11-29 13:13:08.499287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.499302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.499452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.499467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.499567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.499582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.499672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.499688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.499846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.499861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.500101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.500118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.500207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.500222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.500448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.500463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.500569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.500584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.500754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.500769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.500966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.500983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.501149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.501166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.501261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.501276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.501419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.501434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.501532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.501548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.501690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.501705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.501852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.501867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.501960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.835 [2024-11-29 13:13:08.501976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:08.835 qpair failed and we were unable to recover it.
00:29:08.835 [2024-11-29 13:13:08.502070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.502086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.502226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.502240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.502500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.502515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.502749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.502764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.502997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.503014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 
00:29:08.835 [2024-11-29 13:13:08.503092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.503108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.503317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.503332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.503494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.503510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.503592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.503606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.503873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.503888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 
00:29:08.835 [2024-11-29 13:13:08.504075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.504091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.504247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.504262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.504386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.504414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.504639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.504650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.835 [2024-11-29 13:13:08.504861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.504872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 
00:29:08.835 [2024-11-29 13:13:08.505028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.835 [2024-11-29 13:13:08.505040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.835 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.505219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.505229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.505431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.505448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.505601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.505612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.505686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.505696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 
00:29:08.836 [2024-11-29 13:13:08.505799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.505811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.506013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.506025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.506129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.506140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.506293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.506304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.506399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.506410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 
00:29:08.836 [2024-11-29 13:13:08.506690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.506701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.506943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.506960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.507161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.507171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.507325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.507336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.507487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.507499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 
00:29:08.836 [2024-11-29 13:13:08.507725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.507736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.507824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.507834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.508094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.508107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.508208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.508222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.508324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.508335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 
00:29:08.836 [2024-11-29 13:13:08.508485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.508496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.508706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.508717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.508896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.508906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.509140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.509152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.509312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.509324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 
00:29:08.836 [2024-11-29 13:13:08.509532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.509564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.509800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.509832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.510018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.510028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.510182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.510192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.510336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.510347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 
00:29:08.836 [2024-11-29 13:13:08.510555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.510565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.510722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.510732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.510974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.510985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.511187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.511198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.511292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.511303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 
00:29:08.836 [2024-11-29 13:13:08.511455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.511466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.511549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.511560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.511710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.511721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.511890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.836 [2024-11-29 13:13:08.511901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.836 qpair failed and we were unable to recover it. 00:29:08.836 [2024-11-29 13:13:08.511990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.512000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 
00:29:08.837 [2024-11-29 13:13:08.512163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.512175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.512350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.512361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.512511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.512522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.512663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.512674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.512794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.512804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 
00:29:08.837 [2024-11-29 13:13:08.512960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.512994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.513131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.513162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.513362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.513395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.513606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.513616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.513802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.513834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 
00:29:08.837 [2024-11-29 13:13:08.514024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.514059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.514326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.514357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.514508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.514519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.514664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.514675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.514886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.514897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 
00:29:08.837 [2024-11-29 13:13:08.515127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.515138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.515324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.515335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.515515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.515526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.515668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.515681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.515831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.515842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 
00:29:08.837 [2024-11-29 13:13:08.516015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.516026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.516179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.516190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.516351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.516362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.516513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.516524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.516751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.516761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 
00:29:08.837 [2024-11-29 13:13:08.516840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.516850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.516946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.516962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.517124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.517135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.517344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.517376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.517594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.517624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 
00:29:08.837 [2024-11-29 13:13:08.517902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.517914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.518118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.518152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.518409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.518441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.518657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.518689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 00:29:08.837 [2024-11-29 13:13:08.518885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.837 [2024-11-29 13:13:08.518918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.837 qpair failed and we were unable to recover it. 
00:29:08.840 [2024-11-29 13:13:08.540336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.840 [2024-11-29 13:13:08.540347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.840 qpair failed and we were unable to recover it. 00:29:08.840 [2024-11-29 13:13:08.540514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.840 [2024-11-29 13:13:08.540525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.840 qpair failed and we were unable to recover it. 00:29:08.840 [2024-11-29 13:13:08.540775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.840 [2024-11-29 13:13:08.540786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.840 qpair failed and we were unable to recover it. 00:29:08.840 [2024-11-29 13:13:08.541196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.840 [2024-11-29 13:13:08.541215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.840 qpair failed and we were unable to recover it. 00:29:08.840 [2024-11-29 13:13:08.541377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.840 [2024-11-29 13:13:08.541388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.840 qpair failed and we were unable to recover it. 
00:29:08.840 [2024-11-29 13:13:08.541538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.541550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.541643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.541653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.541799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.541810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.541991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.542005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.542079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.542091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 
00:29:08.841 [2024-11-29 13:13:08.542318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.542330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.542425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.542435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.542525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.542537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.542771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.542784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.543024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.543037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 
00:29:08.841 [2024-11-29 13:13:08.543178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.543189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.543351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.543362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.543591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.543603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.543735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.543746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.543933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.543944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 
00:29:08.841 [2024-11-29 13:13:08.544113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.544124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.544290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.544301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.544435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.544446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.544664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.544675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.544821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.544832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 
00:29:08.841 [2024-11-29 13:13:08.544990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.545002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.545148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.545159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.545225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.545234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.545389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.545399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.545542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.545553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 
00:29:08.841 [2024-11-29 13:13:08.545814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.545824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.545891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.545900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.545999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.546010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.546160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.546178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.546324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.546335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 
00:29:08.841 [2024-11-29 13:13:08.546434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.546445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.546536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.546546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.546695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.546707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.546956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.546968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.547119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.547130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 
00:29:08.841 [2024-11-29 13:13:08.547280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.547291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.547375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.547385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.547528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.547539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.547710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.547721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.547895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.547907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 
00:29:08.841 [2024-11-29 13:13:08.548113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.548124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-11-29 13:13:08.548271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-11-29 13:13:08.548283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.548364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.548374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.548517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.548528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.548610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.548620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-11-29 13:13:08.548790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.548800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.548970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.548982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.549062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.549073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.549235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.549246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.549394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.549405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-11-29 13:13:08.549470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.549480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.549644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.549654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.549789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.549800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.549944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.549961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.550051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.550062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-11-29 13:13:08.550140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.550151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.550304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.550315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.550514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.550525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.550657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.550669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.550873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.550884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-11-29 13:13:08.551099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.551110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.551287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.551299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.551441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.551451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.551652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.551664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.551828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.551840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-11-29 13:13:08.551923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.551933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.552153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.552165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.552334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.552345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.552489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.552501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-11-29 13:13:08.552682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.552693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-11-29 13:13:08.552760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-11-29 13:13:08.552770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it.
00:29:08.845 [2024-11-29 13:13:08.572705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.572716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.572860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.572872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.573022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.573034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.573168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.573181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.573331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.573342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 
00:29:08.845 [2024-11-29 13:13:08.573441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.573452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.573601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.573612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.573777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.573788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.574010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.574022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.574190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.574201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 
00:29:08.845 [2024-11-29 13:13:08.574424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.574436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.574571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.574581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.574679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.574690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.574921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.574933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.575161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.575173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 
00:29:08.845 [2024-11-29 13:13:08.575398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.575409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.575568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.575579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.575738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.575749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.575912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.575924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.576149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.576160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 
00:29:08.845 [2024-11-29 13:13:08.576328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.576340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.576432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.576442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.576541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.576552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.576695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.576706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.576853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.576865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 
00:29:08.845 [2024-11-29 13:13:08.577007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.577019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.577150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.845 [2024-11-29 13:13:08.577161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.845 qpair failed and we were unable to recover it. 00:29:08.845 [2024-11-29 13:13:08.577297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.577308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.577500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.577511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.577611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.577622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-11-29 13:13:08.577778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.577790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.578030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.578041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.578288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.578299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.578430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.578442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.578642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.578653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-11-29 13:13:08.578806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.578818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.578953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.578965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.579108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.579120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.579343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.579354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.579436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.579446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-11-29 13:13:08.579647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.579658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.579846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.579857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.580066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.580077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.580210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.580221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.580314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.580325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-11-29 13:13:08.580417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.580427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.580693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.580703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.580804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.580815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.581043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.581054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.581187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.581197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-11-29 13:13:08.581428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.581438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.581611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.581621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.581842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.581853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.582078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.582090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.582221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.582232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-11-29 13:13:08.582444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.582455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.582678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.582689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.582777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.582787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.583002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.583013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.583232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.583242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-11-29 13:13:08.583325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.583335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.583481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.583491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.583701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.583711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.583861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.583872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.584135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.584146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-11-29 13:13:08.584294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.584305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.584504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.584515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.584723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.584735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.584886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.584897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.585124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.585136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-11-29 13:13:08.585279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.585292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.585437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-11-29 13:13:08.585448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-11-29 13:13:08.585691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-11-29 13:13:08.585702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-11-29 13:13:08.585906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-11-29 13:13:08.585917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-11-29 13:13:08.586052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-11-29 13:13:08.586063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 
00:29:08.847 [2024-11-29 13:13:08.586310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.586320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.586466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.586476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.586648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.586659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.586791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.586801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.586966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.586977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.587078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.587088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.587286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.587296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.587476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.587487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.587702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.587713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.587919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.587930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.588173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.588184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.588410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.588420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.588560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.588570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.588792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.588802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.588972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.588984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.589117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.589127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.589354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.589365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.589464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.589475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.589685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.589716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.589923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.589962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.590218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.590251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.590488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.590498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.590710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.590720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.590938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.590953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.591138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.591149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.591332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.591342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.591568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.591578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.591671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.591681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.591830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.591840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.592042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.592053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.592212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.592222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.592391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.592402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.592485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.592495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.592645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.592655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.592853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.592864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.593019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.593032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.593259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.593270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.593446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.593456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.593680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.593690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.593853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.593863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.593961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.593971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.847 [2024-11-29 13:13:08.594120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.847 [2024-11-29 13:13:08.594131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.847 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.594284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.594295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.594384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.594394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.594554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.594564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.594761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.594772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.594955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.594966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.595164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.595174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.595264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.595274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.595449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.595460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.595536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.595546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.595706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.595717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.595892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.595903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.596172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.596183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.596395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.596405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.596524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.596535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.596613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.596623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.596818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.596828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.596979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.596990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.597209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.597219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.597424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.597435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.597654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.597665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.597889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.597900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.598054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.598065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.598199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.598210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.598304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.598314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.598486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.598497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.598594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.598603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.598701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.598711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.598773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.598783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.598918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.598929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.599134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.599145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.599345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.599355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.599502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.599512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.599655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.599666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.599814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.599826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.599968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.599979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.600200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.600210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.600380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.600391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.600598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.600609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.600694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.600704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.600834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.848 [2024-11-29 13:13:08.600844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.848 qpair failed and we were unable to recover it.
00:29:08.848 [2024-11-29 13:13:08.601071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.601082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.601245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.601255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.601399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.601410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.601616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.601647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.601899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.601931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.602126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.602159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.602426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.602458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.602675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.602707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.602927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.602968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.603112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.603143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.603418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.603449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.603729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.603761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.604042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.604078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.604226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.604236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.604425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.604436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.604654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.604686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.604896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.604929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:08.849 [2024-11-29 13:13:08.605128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.849 [2024-11-29 13:13:08.605162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:08.849 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.605382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.605392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.605587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.605598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.605753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.605764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.605910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.605920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.606013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.606024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.606240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.606252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.606344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.606354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.606420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.606430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.606580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.606591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.606763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.606773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.606945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.606960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.607208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.607218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.607419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.607430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.607636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.607646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.607855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.130 [2024-11-29 13:13:08.607866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.130 qpair failed and we were unable to recover it.
00:29:09.130 [2024-11-29 13:13:08.608016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.130 [2024-11-29 13:13:08.608030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.130 qpair failed and we were unable to recover it. 00:29:09.130 [2024-11-29 13:13:08.608226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.130 [2024-11-29 13:13:08.608238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.130 qpair failed and we were unable to recover it. 00:29:09.130 [2024-11-29 13:13:08.608400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.130 [2024-11-29 13:13:08.608411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.130 qpair failed and we were unable to recover it. 00:29:09.130 [2024-11-29 13:13:08.608557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.130 [2024-11-29 13:13:08.608582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.130 qpair failed and we were unable to recover it. 00:29:09.130 [2024-11-29 13:13:08.608774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.130 [2024-11-29 13:13:08.608806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.130 qpair failed and we were unable to recover it. 
00:29:09.130 [2024-11-29 13:13:08.608992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.130 [2024-11-29 13:13:08.609025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.130 qpair failed and we were unable to recover it. 00:29:09.130 [2024-11-29 13:13:08.609161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.130 [2024-11-29 13:13:08.609172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.130 qpair failed and we were unable to recover it. 00:29:09.130 [2024-11-29 13:13:08.609313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.130 [2024-11-29 13:13:08.609323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.130 qpair failed and we were unable to recover it. 00:29:09.130 [2024-11-29 13:13:08.609470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.130 [2024-11-29 13:13:08.609481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.130 qpair failed and we were unable to recover it. 00:29:09.130 [2024-11-29 13:13:08.609623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.609634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 
00:29:09.131 [2024-11-29 13:13:08.609727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.609736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.609945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.609991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.610260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.610292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.610579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.610610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.610819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.610851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 
00:29:09.131 [2024-11-29 13:13:08.611084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.611095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.611190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.611200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.611409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.611420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.611565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.611575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.611728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.611739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 
00:29:09.131 [2024-11-29 13:13:08.611828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.611838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.612003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.612015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.612231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.612261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.612465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.612497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.612698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.612729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 
00:29:09.131 [2024-11-29 13:13:08.612969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.612980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.613128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.613139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.613285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.613296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.613389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.613399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.613539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.613550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 
00:29:09.131 [2024-11-29 13:13:08.613634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.613644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.613725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.613734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.613874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.613885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.614035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.614046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.614113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.614122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 
00:29:09.131 [2024-11-29 13:13:08.614197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.614207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.614376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.614387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.614531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.614542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.614706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.614717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.614799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.614808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 
00:29:09.131 [2024-11-29 13:13:08.615030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.615042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.615241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.615252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.615474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.615485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.615668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.615700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.615918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.615960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 
00:29:09.131 [2024-11-29 13:13:08.616102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.616135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.616284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.131 [2024-11-29 13:13:08.616295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.131 qpair failed and we were unable to recover it. 00:29:09.131 [2024-11-29 13:13:08.616438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.616448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.616541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.616551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.616798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.616830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 
00:29:09.132 [2024-11-29 13:13:08.617019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.617053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.617287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.617320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.617518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.617550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.617809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.617841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.618087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.618123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 
00:29:09.132 [2024-11-29 13:13:08.618373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.618406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.618679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.618711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.618992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.619026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.619207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.619217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.619452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.619484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 
00:29:09.132 [2024-11-29 13:13:08.619752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.619785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.619983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.620017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.620150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.620182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.620424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.620456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.620646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.620677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 
00:29:09.132 [2024-11-29 13:13:08.620956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.620989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.621280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.621291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.621421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.621432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.621669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.621680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.621937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.622005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 
00:29:09.132 [2024-11-29 13:13:08.622171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.622203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.622386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.622417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.622686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.622719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.623048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.623082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 00:29:09.132 [2024-11-29 13:13:08.623193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.132 [2024-11-29 13:13:08.623205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.132 qpair failed and we were unable to recover it. 
00:29:09.132 [2024-11-29 13:13:08.623368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.132 [2024-11-29 13:13:08.623379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.132 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix_sock_create connect() errno 111 → nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f8384000b90, addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously from 13:13:08.623553 through 13:13:08.648938 ...]
00:29:09.135 [2024-11-29 13:13:08.648938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.135 [2024-11-29 13:13:08.648954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.135 qpair failed and we were unable to recover it.
00:29:09.135 [2024-11-29 13:13:08.649132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.135 [2024-11-29 13:13:08.649143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.135 qpair failed and we were unable to recover it. 00:29:09.135 [2024-11-29 13:13:08.649294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.135 [2024-11-29 13:13:08.649306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.135 qpair failed and we were unable to recover it. 00:29:09.135 [2024-11-29 13:13:08.649454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.135 [2024-11-29 13:13:08.649465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.135 qpair failed and we were unable to recover it. 00:29:09.135 [2024-11-29 13:13:08.649549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.135 [2024-11-29 13:13:08.649558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.135 qpair failed and we were unable to recover it. 00:29:09.135 [2024-11-29 13:13:08.649706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.649716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 
00:29:09.136 [2024-11-29 13:13:08.649880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.649890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.650035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.650046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.650135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.650144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.650329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.650339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.650482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.650493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 
00:29:09.136 [2024-11-29 13:13:08.650597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.650609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.650760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.650770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.650850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.650860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.650951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.650961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.651094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.651105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 
00:29:09.136 [2024-11-29 13:13:08.651265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.651276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.651379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.651390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.651561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.651571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.651723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.651734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.651961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.651972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 
00:29:09.136 [2024-11-29 13:13:08.652213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.652245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.652436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.652468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.652658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.652690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.652955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.652966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.653149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.653160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 
00:29:09.136 [2024-11-29 13:13:08.653333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.653365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.653671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.653703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.653819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.653849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.653972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.653984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.654123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.654134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 
00:29:09.136 [2024-11-29 13:13:08.654351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.654362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.654514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.654525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.654775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.654786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.654851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.654860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 00:29:09.136 [2024-11-29 13:13:08.655069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.655080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it. 
00:29:09.136 [2024-11-29 13:13:08.655766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.136 [2024-11-29 13:13:08.655839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.136 qpair failed and we were unable to recover it.
00:29:09.137 [2024-11-29 13:13:08.664007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.137 [2024-11-29 13:13:08.664043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.137 qpair failed and we were unable to recover it.
00:29:09.138 [2024-11-29 13:13:08.667760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.667791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.668032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.668048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.668263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.668294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.668502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.668533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.668719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.668749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 
00:29:09.138 [2024-11-29 13:13:08.668943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.668983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.669179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.669209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.669450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.669481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.669698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.669747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.669968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.670011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 
00:29:09.138 [2024-11-29 13:13:08.670210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.670251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.670379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.670395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.670607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.670624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.670852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.670867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.670985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.671001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 
00:29:09.138 [2024-11-29 13:13:08.671155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.671169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.671351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.671369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.671514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.671529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.671711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.671726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.671899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.671913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 
00:29:09.138 [2024-11-29 13:13:08.672080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.672096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.672251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.672266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.672374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.672388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.672586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.672601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 00:29:09.138 [2024-11-29 13:13:08.672779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.138 [2024-11-29 13:13:08.672794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.138 qpair failed and we were unable to recover it. 
00:29:09.139 [2024-11-29 13:13:08.672875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.672889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.673034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.673049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.673219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.673234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.673391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.673406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.673578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.673618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 
00:29:09.139 [2024-11-29 13:13:08.673766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.673797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.673920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.673958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.674230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.674262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.674476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.674507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.674769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.674800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 
00:29:09.139 [2024-11-29 13:13:08.675101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.675116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.675220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.675241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.675390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.675405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.675674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.675706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.675974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.676007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 
00:29:09.139 [2024-11-29 13:13:08.676208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.676239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.676433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.676463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.676727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.676759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.676981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.677015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.677211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.677243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 
00:29:09.139 [2024-11-29 13:13:08.677532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.677547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.677650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.677665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.677895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.677909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.678060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.678075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.678230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.678244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 
00:29:09.139 [2024-11-29 13:13:08.678340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.678353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.678493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.678508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.678706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.678737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.678929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.678991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.679311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.679343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 
00:29:09.139 [2024-11-29 13:13:08.679534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.679566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.679853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.679885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.680085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.680117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.139 [2024-11-29 13:13:08.680317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.139 [2024-11-29 13:13:08.680348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.139 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.680543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.680575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 
00:29:09.140 [2024-11-29 13:13:08.680788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.680820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.681026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.681042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.681202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.681216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.681334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.681352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.681500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.681515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 
00:29:09.140 [2024-11-29 13:13:08.681679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.681712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.681912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.681944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.682102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.682135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.682356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.682370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.682554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.682569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 
00:29:09.140 [2024-11-29 13:13:08.682809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.682840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.683102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.683136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.683410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.683425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.683684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.683698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.683849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.683864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 
00:29:09.140 [2024-11-29 13:13:08.684066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.684099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.684240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.684271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.684534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.684606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.684899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.684945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 00:29:09.140 [2024-11-29 13:13:08.685105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.140 [2024-11-29 13:13:08.685115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.140 qpair failed and we were unable to recover it. 
00:29:09.143 [... the same three-record pattern (posix_sock_create connect() failed errno = 111 / nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 13:13:08.685365 through 13:13:08.708102 for tqpair=0x7f8384000b90, 0x7f838c000b90, and 0x7f8380000b90 ...]
00:29:09.143 [2024-11-29 13:13:08.708260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.708275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.708478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.708510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.708753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.708785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.708980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.709013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.709217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.709249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 
00:29:09.143 [2024-11-29 13:13:08.709451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.709466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.709698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.709712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.709925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.709940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.710180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.710194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.710345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.710360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 
00:29:09.143 [2024-11-29 13:13:08.710570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.710602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.710817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.710848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.710991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.711025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.711258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.711272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.711431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.711445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 
00:29:09.143 [2024-11-29 13:13:08.711673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.711688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.711894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.711909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.143 [2024-11-29 13:13:08.712101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.143 [2024-11-29 13:13:08.712116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.143 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.712272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.712287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.712390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.712405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 
00:29:09.144 [2024-11-29 13:13:08.712680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.712697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.712848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.712863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.713015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.713030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.713193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.713208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.713318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.713333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 
00:29:09.144 [2024-11-29 13:13:08.713610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.713625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.713862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.713894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.714105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.714138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.714328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.714361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.714640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.714655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 
00:29:09.144 [2024-11-29 13:13:08.714828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.714843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.714999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.715014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.715115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.715129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.715233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.715248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.715457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.715472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 
00:29:09.144 [2024-11-29 13:13:08.715703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.715717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.715926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.715940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.716129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.716144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.716300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.716314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.716468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.716501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 
00:29:09.144 [2024-11-29 13:13:08.716780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.716812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.717015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.717049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.717265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.717297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.717485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.717499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.717678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.717693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 
00:29:09.144 [2024-11-29 13:13:08.717863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.717895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.718117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.718151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.718393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.718464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.718692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.718728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.718944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.719003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 
00:29:09.144 [2024-11-29 13:13:08.719215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.719226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.719422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.719433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.719591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.719607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.719769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.719786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.720012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.720032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 
00:29:09.144 [2024-11-29 13:13:08.720133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.144 [2024-11-29 13:13:08.720144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.144 qpair failed and we were unable to recover it. 00:29:09.144 [2024-11-29 13:13:08.720366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.720378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.720486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.720497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.720724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.720734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.720931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.720942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 
00:29:09.145 [2024-11-29 13:13:08.721024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.721037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.721262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.721273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.721444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.721455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.721690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.721722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.721929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.721971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 
00:29:09.145 [2024-11-29 13:13:08.722175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.722207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.722483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.722514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.722804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.722836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.723083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.723117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.723389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.723421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 
00:29:09.145 [2024-11-29 13:13:08.723621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.723653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.723795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.723827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.724112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.724146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.724391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.724423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.724644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.724677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 
00:29:09.145 [2024-11-29 13:13:08.724932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.724973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.725292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.725303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.725406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.725418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.725664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.725674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 00:29:09.145 [2024-11-29 13:13:08.725823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.145 [2024-11-29 13:13:08.725833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.145 qpair failed and we were unable to recover it. 
00:29:09.148 [2024-11-29 13:13:08.747360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.747370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.747462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.747472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.747821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.747831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.747996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.748007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.748091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.748101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 
00:29:09.148 [2024-11-29 13:13:08.748299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.748310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.748460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.748471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.748725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.748736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.748814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.748823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.748972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.748982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 
00:29:09.148 [2024-11-29 13:13:08.749079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.749090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.749245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.749255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.749386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.749397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.749484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.749494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 00:29:09.148 [2024-11-29 13:13:08.749597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.148 [2024-11-29 13:13:08.749607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.148 qpair failed and we were unable to recover it. 
00:29:09.149 [2024-11-29 13:13:08.749700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.749710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.749837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.749848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.749977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.749990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.750157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.750168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.750326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.750338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 
00:29:09.149 [2024-11-29 13:13:08.750440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.750451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.750683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.750693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.750861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.750873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.751115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.751149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.751369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.751401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 
00:29:09.149 [2024-11-29 13:13:08.751596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.751629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.751884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.751916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.752125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.752167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.752371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.752382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.752474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.752484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 
00:29:09.149 [2024-11-29 13:13:08.752664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.752675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.752767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.752778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.753001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.753012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.753160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.753171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.753386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.753418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 
00:29:09.149 [2024-11-29 13:13:08.753750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.753783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.753979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.754012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.754201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.754232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.754380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.754390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.754524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.754535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 
00:29:09.149 [2024-11-29 13:13:08.754703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.754714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.754817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.754828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.754992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.755004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.755188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.755198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.755302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.755312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 
00:29:09.149 [2024-11-29 13:13:08.755408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.755419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.755575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.755585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.755735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.755745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.755886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.755897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.756067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.756078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 
00:29:09.149 [2024-11-29 13:13:08.756277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.756288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.756445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.756455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.756705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.149 [2024-11-29 13:13:08.756737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.149 qpair failed and we were unable to recover it. 00:29:09.149 [2024-11-29 13:13:08.756936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.756978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.757113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.757144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 
00:29:09.150 [2024-11-29 13:13:08.757344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.757355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.757557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.757588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.757836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.757873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.758068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.758101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.758292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.758303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 
00:29:09.150 [2024-11-29 13:13:08.758394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.758406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.758631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.758642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.758814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.758825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.759019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.759031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.759235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.759267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 
00:29:09.150 [2024-11-29 13:13:08.759460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.759492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.759762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.759794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.760067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.760101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.760242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.760274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.760425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.760436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 
00:29:09.150 [2024-11-29 13:13:08.760531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.760542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.760714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.760725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.760896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.760907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.761109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.761120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 00:29:09.150 [2024-11-29 13:13:08.761324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.150 [2024-11-29 13:13:08.761335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.150 qpair failed and we were unable to recover it. 
00:29:09.150 [2024-11-29 13:13:08.761414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.150 [2024-11-29 13:13:08.761424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.150 qpair failed and we were unable to recover it.
[the identical three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 13:13:08.761579 through 13:13:08.782866]
00:29:09.153 [2024-11-29 13:13:08.783073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.783084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.153 [2024-11-29 13:13:08.783258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.783269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.153 [2024-11-29 13:13:08.783349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.783359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.153 [2024-11-29 13:13:08.783606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.783616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.153 [2024-11-29 13:13:08.783844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.783854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 
00:29:09.153 [2024-11-29 13:13:08.784069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.784081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.153 [2024-11-29 13:13:08.784181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.784191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.153 [2024-11-29 13:13:08.784276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.784286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.153 [2024-11-29 13:13:08.784375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.784385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.153 [2024-11-29 13:13:08.784533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.784544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 
00:29:09.153 [2024-11-29 13:13:08.784779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.784811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.153 qpair failed and we were unable to recover it. 00:29:09.153 [2024-11-29 13:13:08.785053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.153 [2024-11-29 13:13:08.785086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.785298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.785329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.785591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.785602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.785808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.785819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-11-29 13:13:08.785885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.785895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.786123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.786134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.786344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.786354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.786483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.786494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.786575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.786585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-11-29 13:13:08.786745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.786755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.786894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.786905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.787112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.787123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.787370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.787381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.787579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.787589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-11-29 13:13:08.787729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.787740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.787998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.788030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.788180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.788217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.788493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.788524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.788787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.788819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-11-29 13:13:08.789069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.789101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.789300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.789331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.789436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.789449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.789646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.789657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.789796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.789806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-11-29 13:13:08.790026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.790037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.790171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.790181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.790379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.790410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.790678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.790711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.790959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.790992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-11-29 13:13:08.791247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.791279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.791493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.791528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.791618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.791627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.791849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.791860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.792069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.792080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-11-29 13:13:08.792278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.792309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.792589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.792621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.792864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.792895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.793125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.793159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-11-29 13:13:08.793349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-11-29 13:13:08.793381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
[log trimmed: further verbatim repeats of the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f8384000b90 from 13:13:08.793560 through 13:13:08.794224]
00:29:09.155 [2024-11-29 13:13:08.794591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-11-29 13:13:08.794662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it.
[log trimmed: the same sequence for tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 repeats verbatim from 13:13:08.794883 through 13:13:08.804444]
00:29:09.156 [2024-11-29 13:13:08.804689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.804720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.804971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.805005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.805272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.805305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.805596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.805627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.805845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.805877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-11-29 13:13:08.806141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.806175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.806421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.806452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.806714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.806729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.806877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.806891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.807113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.807147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-11-29 13:13:08.807421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.807454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.807714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.807748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.807963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.808002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.808271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.808304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.808608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.808641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-11-29 13:13:08.808899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.808930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.809167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.809200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.809471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.809502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.809677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.809692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.809929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.809975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-11-29 13:13:08.810166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.810197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.810412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.810445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.810711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.810726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.810918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.810933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.811122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.811157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-11-29 13:13:08.811452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.811488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.811687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.811719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.811909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.811940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.812151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.812183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.812427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.812471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-11-29 13:13:08.812625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.812635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-11-29 13:13:08.812847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-11-29 13:13:08.812880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.813138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.813172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.813458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.813468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.813667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.813678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-11-29 13:13:08.813767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.813777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.813910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.813921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.814143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.814154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.814323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.814334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.814484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.814501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-11-29 13:13:08.814730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.814761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.814967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.815001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.815199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.815232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.815472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.815487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.815692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.815707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-11-29 13:13:08.815943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.815965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.816130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.816145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.816320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.816336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.816526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.816557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.816768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.816799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-11-29 13:13:08.817090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.817126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.817313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.817344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.817588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.817626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.817860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.817875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.818027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.818043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-11-29 13:13:08.818217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.818232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.818492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.818507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.818713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.818726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.818877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.818888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.819145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.819178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-11-29 13:13:08.819456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.819489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.819738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.819750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.819963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.819974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.820174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.820186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-11-29 13:13:08.820381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-11-29 13:13:08.820393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-11-29 13:13:08.820588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.820599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-11-29 13:13:08.820764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.820775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-11-29 13:13:08.820917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.820929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-11-29 13:13:08.821016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.821027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-11-29 13:13:08.821197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.821207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 
00:29:09.158 [2024-11-29 13:13:08.821386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.821397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-11-29 13:13:08.821568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.821579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-11-29 13:13:08.821804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.821837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-11-29 13:13:08.822037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.822070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-11-29 13:13:08.822338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-11-29 13:13:08.822376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 
00:29:09.158 [2024-11-29 13:13:08.822581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.158 [2024-11-29 13:13:08.822592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.158 qpair failed and we were unable to recover it.
00:29:09.161 [three-line pattern above repeated for every subsequent reconnect attempt from 13:13:08.822754 through 13:13:08.846888; tqpair=0x7f8384000b90 on most attempts, with single attempts on tqpair=0x1dfabe0, tqpair=0x7f838c000b90, and tqpair=0x7f8380000b90, all with addr=10.0.0.2, port=4420]
00:29:09.161 [2024-11-29 13:13:08.846974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.846986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.847125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.847157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.847426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.847460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.847579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.847612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.847873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.847885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 
00:29:09.161 [2024-11-29 13:13:08.848107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.848118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.848295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.848306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.848532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.848565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.848874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.848906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.849113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.849154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 
00:29:09.161 [2024-11-29 13:13:08.849405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.849438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.849666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.849676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.849759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.849770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.849849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.849860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.850110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.850143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 
00:29:09.161 [2024-11-29 13:13:08.850414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.850446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.850732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.850764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.851065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.851099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.851229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.851272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.851472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.851482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 
00:29:09.161 [2024-11-29 13:13:08.851640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.851651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.851852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.851885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.852131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.852165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.852351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.852384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.852663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.852695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 
00:29:09.161 [2024-11-29 13:13:08.852969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.853002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.853144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.161 [2024-11-29 13:13:08.853177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.161 qpair failed and we were unable to recover it. 00:29:09.161 [2024-11-29 13:13:08.853423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.853454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.853630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.853661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.853904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.853935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-11-29 13:13:08.854193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.854226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.854439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.854471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.854735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.854767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.854886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.854897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.855046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.855057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-11-29 13:13:08.855247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.855258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.855488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.855521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.855662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.855673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.855863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.855894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.856113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.856148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-11-29 13:13:08.856452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.856484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.856735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.856746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.856889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.856899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.857084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.857117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.857388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.857419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-11-29 13:13:08.857604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.857615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.857847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.857880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.858196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.858230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.858439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.858472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.858734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.858748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-11-29 13:13:08.858968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.858979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.859044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.859082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.859385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.859418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.859615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.859647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.859936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.859978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-11-29 13:13:08.860111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.860143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.860385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.860418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.860603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.860635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.860815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.860844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.861064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.861076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-11-29 13:13:08.861243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.861254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-11-29 13:13:08.861431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-11-29 13:13:08.861464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.861650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.861682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.861973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.862007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.862279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.862312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-11-29 13:13:08.862594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.862626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.862818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.862850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.863026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.863060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.863175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.863207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.863369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.863402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-11-29 13:13:08.863593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.863625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.863881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.863914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.864172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.864204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.864409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.864442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-11-29 13:13:08.864644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-11-29 13:13:08.864677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-11-29 13:13:08.864932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.163 [2024-11-29 13:13:08.864973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.163 qpair failed and we were unable to recover it.
[duplicate records elided: the identical connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f8384000b90 (addr=10.0.0.2, port=4420) repeat continuously from 13:13:08.864932 through 13:13:08.889968]
00:29:09.166 [2024-11-29 13:13:08.890115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.890126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.890278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.890290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.890422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.890433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.890650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.890661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.890861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.890871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 
00:29:09.166 [2024-11-29 13:13:08.891113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.891147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.891449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.891519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.891724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.891760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.891959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.891976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.892142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.892174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 
00:29:09.166 [2024-11-29 13:13:08.892450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.892482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.892618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.892652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.892841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.892857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.893082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.893117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.893307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.893341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 
00:29:09.166 [2024-11-29 13:13:08.893545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.893579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.893847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.893880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.894179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.894218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.894352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.894388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-11-29 13:13:08.894644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.894678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 
00:29:09.166 [2024-11-29 13:13:08.894873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-11-29 13:13:08.894890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.895122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.895138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.895290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.895306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.895475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.895507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.895770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.895803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 
00:29:09.167 [2024-11-29 13:13:08.896068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.896086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.896252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.896284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.896580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.896616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.896907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.896966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.897174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.897210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 
00:29:09.167 [2024-11-29 13:13:08.897358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.897391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.897661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.897693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.897973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.898008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.898204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.898240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.898525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.898557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 
00:29:09.167 [2024-11-29 13:13:08.898674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.898685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.898884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.898914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.899066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.899099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.899329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.899360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.899550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.899561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 
00:29:09.167 [2024-11-29 13:13:08.899706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.899717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.899815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.899826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.899989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.900000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.900167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.900178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.900326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.900337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 
00:29:09.167 [2024-11-29 13:13:08.900487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.900499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.900586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.900597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.900762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.900795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.900991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.901024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.901250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.901282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 
00:29:09.167 [2024-11-29 13:13:08.901527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.901559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.901707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.901739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.901978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.901989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.902152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.902185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.902392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.902424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 
00:29:09.167 [2024-11-29 13:13:08.902626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.902637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.902809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.902821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.903000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.167 [2024-11-29 13:13:08.903034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.167 qpair failed and we were unable to recover it. 00:29:09.167 [2024-11-29 13:13:08.903257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.903289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.903471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.903482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 
00:29:09.168 [2024-11-29 13:13:08.903645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.903675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.903793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.903825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.904079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.904113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.904388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.904421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.904663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.904674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 
00:29:09.168 [2024-11-29 13:13:08.904833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.904844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.905058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.905092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.905252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.905284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.905501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.905534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.905747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.905759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 
00:29:09.168 [2024-11-29 13:13:08.905937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.905978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.906251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.906285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.906434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.906467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.906665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.906708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 00:29:09.168 [2024-11-29 13:13:08.906920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.906969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it. 
00:29:09.168 [2024-11-29 13:13:08.907247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.168 [2024-11-29 13:13:08.907279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.168 qpair failed and we were unable to recover it.
[... the same pair of errors — posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock failing for tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously from 13:13:08.907 through 13:13:08.932 (log timestamps 00:29:09.168–00:29:09.451); repeats elided ...]
00:29:09.451 [2024-11-29 13:13:08.932468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.932479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.932735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.932769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.932910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.932943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.933147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.933186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.933332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.933363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 
00:29:09.451 [2024-11-29 13:13:08.933665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.933697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.933836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.933847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.933943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.933960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.934131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.934164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.934355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.934388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 
00:29:09.451 [2024-11-29 13:13:08.934623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.934634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.934821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.934831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.935004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.935015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.935162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.935194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 00:29:09.451 [2024-11-29 13:13:08.935395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.451 [2024-11-29 13:13:08.935428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.451 qpair failed and we were unable to recover it. 
00:29:09.452 [2024-11-29 13:13:08.935611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.935643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.935850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.935862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.936038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.936072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.936277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.936309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.936518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.936550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 
00:29:09.452 [2024-11-29 13:13:08.936819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.936851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.937141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.937174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.937367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.937400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.937650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.937682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.937906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.937919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 
00:29:09.452 [2024-11-29 13:13:08.938000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.938012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.938228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.938261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.938544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.938577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.938711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.938744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.938958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.938970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 
00:29:09.452 [2024-11-29 13:13:08.939125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.939160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.939351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.939384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.939663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.939695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.939898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.939930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.940147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.940182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 
00:29:09.452 [2024-11-29 13:13:08.940428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.940460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.940590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.940601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.940755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.940799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.941003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.941037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.941256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.941288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 
00:29:09.452 [2024-11-29 13:13:08.941509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.941543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.941834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.941867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.942069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.942091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.942250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.942291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.942512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.942546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 
00:29:09.452 [2024-11-29 13:13:08.942822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.942855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.943033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.943067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.943290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.943322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.943500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.943532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.943794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.943826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 
00:29:09.452 [2024-11-29 13:13:08.944102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.944113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.944306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.944338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.452 [2024-11-29 13:13:08.944642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.452 [2024-11-29 13:13:08.944675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.452 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.944879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.944890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.945036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.945047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 
00:29:09.453 [2024-11-29 13:13:08.945221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.945252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.945523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.945554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.945842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.945874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.946100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.946134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.946261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.946293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 
00:29:09.453 [2024-11-29 13:13:08.946565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.946598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.946909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.946941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.947245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.947279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.947528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.947559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.947822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.947862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 
00:29:09.453 [2024-11-29 13:13:08.948152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.948186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.948404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.948435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.948685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.948717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.948930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.948970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.949212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.949223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 
00:29:09.453 [2024-11-29 13:13:08.949401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.949412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.949587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.949618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.949826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.949858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.950103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.950114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 00:29:09.453 [2024-11-29 13:13:08.950341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.453 [2024-11-29 13:13:08.950373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.453 qpair failed and we were unable to recover it. 
00:29:09.453 [2024-11-29 13:13:08.950593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.453 [2024-11-29 13:13:08.950624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.453 qpair failed and we were unable to recover it.
[... the three-line error pattern above repeats ~115 times between [2024-11-29 13:13:08.950] and [2024-11-29 13:13:08.980]: tqpair=0x7f8384000b90 through [13:13:08.955885], then tqpair=0x7f838c000b90 from [13:13:08.956274] onward; every occurrence reports errno = 111 against addr=10.0.0.2, port=4420 ...]
00:29:09.456 [2024-11-29 13:13:08.980285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-11-29 13:13:08.980368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-11-29 13:13:08.980666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-11-29 13:13:08.980701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-11-29 13:13:08.980972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-11-29 13:13:08.981009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-11-29 13:13:08.981159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-11-29 13:13:08.981191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-11-29 13:13:08.981480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-11-29 13:13:08.981512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 
00:29:09.456 [2024-11-29 13:13:08.981759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-11-29 13:13:08.981791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-11-29 13:13:08.981983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-11-29 13:13:08.982017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-11-29 13:13:08.982262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-11-29 13:13:08.982277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.456 [2024-11-29 13:13:08.982485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.456 [2024-11-29 13:13:08.982499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.456 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.982707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.982722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-11-29 13:13:08.982931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.982946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.983190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.983205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.983485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.983517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.983765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.983806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.984051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.984066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-11-29 13:13:08.984280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.984311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.984585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.984617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.984909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.984940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.985192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.985225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.985535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.985566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-11-29 13:13:08.985766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.985798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.986074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.986108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.986384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.986417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.986692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.986724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.986920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.986959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-11-29 13:13:08.987224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.987239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.987505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.987519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.987704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.987719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.987821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.987836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.988047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.988078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-11-29 13:13:08.988353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.988385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.988616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.988649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.988848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.988879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.989157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.989173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.989404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.989419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-11-29 13:13:08.989651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.989665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.989898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.989913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.990153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.990169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.990327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.990342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.990533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.990564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-11-29 13:13:08.990839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.990870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.991123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.991139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.991302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.991316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.991575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.991607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.991877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.991909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 
00:29:09.457 [2024-11-29 13:13:08.992171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.992187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.992336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.457 [2024-11-29 13:13:08.992350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.457 qpair failed and we were unable to recover it. 00:29:09.457 [2024-11-29 13:13:08.992582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.992596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.992754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.992769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.992917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.992932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 
00:29:09.458 [2024-11-29 13:13:08.993076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.993092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.993243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.993257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.993496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.993526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.993726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.993769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.994013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.994047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 
00:29:09.458 [2024-11-29 13:13:08.994291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.994323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.994596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.994627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.994921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.994960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.995228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.995260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.995460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.995492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 
00:29:09.458 [2024-11-29 13:13:08.995675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.995707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.995946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.995964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.996195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.996210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.996450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.996481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.996661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.996693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 
00:29:09.458 [2024-11-29 13:13:08.996903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.996917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.997160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.997193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.997420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.997452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.997643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.997676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.997913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.997927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 
00:29:09.458 [2024-11-29 13:13:08.998045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.998060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.998304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.998335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.998581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.998613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.998878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.998909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.999193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.999226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 
00:29:09.458 [2024-11-29 13:13:08.999508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.999539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:08.999821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:08.999857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:09.000065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:09.000081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:09.000235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:09.000249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 00:29:09.458 [2024-11-29 13:13:09.000493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.458 [2024-11-29 13:13:09.000524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.458 qpair failed and we were unable to recover it. 
00:29:09.461 [2024-11-29 13:13:09.029855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.461 [2024-11-29 13:13:09.029886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.461 qpair failed and we were unable to recover it. 00:29:09.461 [2024-11-29 13:13:09.030128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.461 [2024-11-29 13:13:09.030146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.461 qpair failed and we were unable to recover it. 00:29:09.461 [2024-11-29 13:13:09.030363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.461 [2024-11-29 13:13:09.030395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.461 qpair failed and we were unable to recover it. 00:29:09.461 [2024-11-29 13:13:09.030579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.461 [2024-11-29 13:13:09.030611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.461 qpair failed and we were unable to recover it. 00:29:09.461 [2024-11-29 13:13:09.030905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.030936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 
00:29:09.462 [2024-11-29 13:13:09.031210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.031244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.031491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.031522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.031707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.031739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.032029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.032045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.032254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.032269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 
00:29:09.462 [2024-11-29 13:13:09.032523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.032573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.032780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.032813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.033086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.033119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.033336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.033350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.033538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.033571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 
00:29:09.462 [2024-11-29 13:13:09.033699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.033731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.033929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.033971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.034228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.034243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.034454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.034469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.034631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.034646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 
00:29:09.462 [2024-11-29 13:13:09.034863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.034878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.035034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.035050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.035211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.035226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.035457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.035472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.035684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.035699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 
00:29:09.462 [2024-11-29 13:13:09.035857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.035872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.036027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.036042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.036278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.036309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.036555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.036588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.036856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.036899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 
00:29:09.462 [2024-11-29 13:13:09.037063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.037079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.037256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.037289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.037469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.037501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.037632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.037663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.037917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.037958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 
00:29:09.462 [2024-11-29 13:13:09.038266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.038299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.038570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.038602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.038829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.038862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.039142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.039175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.039453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.039485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 
00:29:09.462 [2024-11-29 13:13:09.039664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.039697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.462 [2024-11-29 13:13:09.039968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.462 [2024-11-29 13:13:09.040002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.462 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.040199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.040232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.040369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.040401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.040707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.040738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.463 [2024-11-29 13:13:09.040933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.040957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.041129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.041161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.041423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.041455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.041645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.041677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.041861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.041875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.463 [2024-11-29 13:13:09.042116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.042156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.042405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.042437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.042618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.042650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.042957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.042990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.043239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.043272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.463 [2024-11-29 13:13:09.043512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.043543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.043805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.043836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.044131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.044165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.044471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.044502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.044791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.044823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.463 [2024-11-29 13:13:09.045017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.045060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.045215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.045230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.045469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.045484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.045642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.045675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.045988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.046022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.463 [2024-11-29 13:13:09.046295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.046312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.046493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.046508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.046683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.046715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.046916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.046958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.047183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.047215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.463 [2024-11-29 13:13:09.047350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.047365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.047507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.047547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.047824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.047856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.048061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.048094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.048330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.048345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.463 [2024-11-29 13:13:09.048640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.048671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.048872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.048904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.049215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.049248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.049509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.049540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-11-29 13:13:09.049842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-11-29 13:13:09.049874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-11-29 13:13:09.077976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.077992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.078188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.078202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.078277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.078319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.078568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.078600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.078900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.078932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-11-29 13:13:09.079191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.079224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.079428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.079461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.079743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.079775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.080053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.080069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.080283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.080298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-11-29 13:13:09.080408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.080422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.080567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.080582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.080796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.080811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.080969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.080985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.081107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.081139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-11-29 13:13:09.081418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.081450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.081650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.081683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.081866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.081898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.082061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.082094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.082323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.082355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-11-29 13:13:09.082562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.082600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.082715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.082747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.083030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.083064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.083343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.083375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.083627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.083660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-11-29 13:13:09.083912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.083944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.084248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.084282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.084535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.084567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.084820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.084850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.084998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.085031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-11-29 13:13:09.085224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.085256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.085445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.085460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.085653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.085686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.085833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.085865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.086015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.086030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-11-29 13:13:09.086244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.086259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.086490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.086522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.086718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.086751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-11-29 13:13:09.087019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-11-29 13:13:09.087053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.087257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.087288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-11-29 13:13:09.087440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.087472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.087750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.087783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.087911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.087943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.088227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.088260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.088513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.088528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-11-29 13:13:09.088697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.088712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.088962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.088978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.089203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.089235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.089508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.089559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.089834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.089866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-11-29 13:13:09.090058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.090091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.090339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.090355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.090588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.090602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.090762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.090777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.090958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.090992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-11-29 13:13:09.091128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.091161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.091360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.091391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.091594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.091626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.091899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.091931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.092068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.092082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-11-29 13:13:09.092164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.092181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.092415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.092430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.092636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.092651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.092891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.092906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.093082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.093098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-11-29 13:13:09.093312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.093326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.093515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.093530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.093630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.093645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.093823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.093855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.094041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.094074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-11-29 13:13:09.094200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.094233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.094433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.094465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.094735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.094767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.095045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.095079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-11-29 13:13:09.095362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-11-29 13:13:09.095378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-11-29 13:13:09.095539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.468 [2024-11-29 13:13:09.095554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.468 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it" sequence for tqpair=0x7f8380000b90 repeats through 13:13:09.117919; duplicate records omitted ...]
00:29:09.471 [2024-11-29 13:13:09.118186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.471 [2024-11-29 13:13:09.118230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.471 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x1dfabe0 repeats through 13:13:09.125298; duplicate records omitted ...]
00:29:09.472 [2024-11-29 13:13:09.125465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.125498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.125693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.125725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.126021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.126054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.126300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.126333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.126566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.126584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-11-29 13:13:09.126806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.126821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.126972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.126988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.127251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.127266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.127434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.127449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.127840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.127872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-11-29 13:13:09.128134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.128168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.128450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.128482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.128801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.128834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.129036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.129070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.129209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.129241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-11-29 13:13:09.129428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.129460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.129737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.129752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.129990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.130024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.130242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.130274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.130471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.130504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-11-29 13:13:09.130727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.130760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.130983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.131032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.131137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.131152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.131331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.131363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.131570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.131602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-11-29 13:13:09.131862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.131895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.132116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.132148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.132344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.132376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.132563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.132579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.132825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.132857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-11-29 13:13:09.133046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.133080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.133305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.133343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.133654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.133686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.133907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.133939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-11-29 13:13:09.134097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.134113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-11-29 13:13:09.134362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-11-29 13:13:09.134394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.134621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.134654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.134937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.134978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.135258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.135291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.135490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.135506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-11-29 13:13:09.135739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.135771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.135990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.136023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.136179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.136211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.136408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.136440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.136764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.136797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-11-29 13:13:09.137033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.137065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.137354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.137387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.137528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.137560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.137770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.137802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.138080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.138114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-11-29 13:13:09.138399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.138444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.138579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.138594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.138822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.138853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.139084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.139117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.139375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.139405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-11-29 13:13:09.139689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.139720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.139928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.139970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.140244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.140260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.140514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.140546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.140703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.140736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-11-29 13:13:09.141072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.141089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.141287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.141320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.141620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.141652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.141926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.141972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.142261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.142294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-11-29 13:13:09.142559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.142582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.142753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.142768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.143010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.143026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.143271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.143286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-11-29 13:13:09.143442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-11-29 13:13:09.143457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.474 [2024-11-29 13:13:09.143685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.143717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-11-29 13:13:09.143991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.144025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-11-29 13:13:09.144324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.144356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-11-29 13:13:09.144646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.144678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-11-29 13:13:09.144873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.144905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 
00:29:09.474 [2024-11-29 13:13:09.145055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.145071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-11-29 13:13:09.145322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.145354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-11-29 13:13:09.145564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.145597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-11-29 13:13:09.145884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.145916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-11-29 13:13:09.146087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-11-29 13:13:09.146120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 
00:29:09.477 [2024-11-29 13:13:09.172138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.172171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.172303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.172335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.172522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.172554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.172834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.172866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.173069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.173109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 
00:29:09.477 [2024-11-29 13:13:09.173349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.173382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.173676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.173708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.173909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.173942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.174213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.174245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.174426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.174442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 
00:29:09.477 [2024-11-29 13:13:09.174687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.174703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.174956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.174972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.175126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.175141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.175305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.175320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.175581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.175613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 
00:29:09.477 [2024-11-29 13:13:09.175812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.175843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.176055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.176093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.176198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.176226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.176342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.176377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.176515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.176547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 
00:29:09.477 [2024-11-29 13:13:09.176694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.176726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.176924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.176965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.177156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.177188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.177449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.177481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.177612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.177644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 
00:29:09.477 [2024-11-29 13:13:09.177921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.177962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.178160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.178192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.178419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.178451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.178633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.178649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.178825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.178858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 
00:29:09.477 [2024-11-29 13:13:09.179141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.179175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.179407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.179446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.179587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.179619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.179838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.179870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 00:29:09.477 [2024-11-29 13:13:09.180152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.477 [2024-11-29 13:13:09.180187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.477 qpair failed and we were unable to recover it. 
00:29:09.478 [2024-11-29 13:13:09.180298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.180313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.180567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.180599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.180816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.180849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.181145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.181178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.181406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.181438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 
00:29:09.478 [2024-11-29 13:13:09.181641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.181673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.181960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.181993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.182252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.182267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.182374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.182389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.182550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.182565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 
00:29:09.478 [2024-11-29 13:13:09.182763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.182795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.183079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.183111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.183321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.183353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.183486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.183517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.183812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.183844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 
00:29:09.478 [2024-11-29 13:13:09.183978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.184012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.184211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.184243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.184463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.184495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.184698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.184730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.184880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.184912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 
00:29:09.478 [2024-11-29 13:13:09.185231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.185264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.185398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.185431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.185654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.185669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.185785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.185803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.186032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.186065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 
00:29:09.478 [2024-11-29 13:13:09.186362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.186393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.186586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.186618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.186819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.186850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.186979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.187011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.187206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.187238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 
00:29:09.478 [2024-11-29 13:13:09.187444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.187475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.187595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.187627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.187880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.187912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.478 qpair failed and we were unable to recover it. 00:29:09.478 [2024-11-29 13:13:09.188210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.478 [2024-11-29 13:13:09.188242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.188359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.188390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-11-29 13:13:09.188522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.188537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.188688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.188723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.188928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.188968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.189096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.189128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.189407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.189438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-11-29 13:13:09.189637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.189669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.189905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.189937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.190202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.190234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.190440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.190455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.190607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.190641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-11-29 13:13:09.190917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.190956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.191164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.191196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.191401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.191433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.191569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.191584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.191732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.191747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-11-29 13:13:09.191910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.191943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.192155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.192187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.192442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.192489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.192705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.192719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.192815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.192829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-11-29 13:13:09.192920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.192934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.193162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.193177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.193346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.193361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.193530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.193561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.193697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.193727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-11-29 13:13:09.193921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.193960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.194097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.194128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.194253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.194268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.194441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.194456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.194597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.194629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-11-29 13:13:09.194810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.194841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.195118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.195152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.195412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.195427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.195683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.195723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.195922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.195972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-11-29 13:13:09.196101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.196133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.196333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.196365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-11-29 13:13:09.196587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-11-29 13:13:09.196619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.196875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.196925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.197123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.197156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-11-29 13:13:09.197350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.197383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.197635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.197650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.197865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.197880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.198111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.198128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.198275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.198289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-11-29 13:13:09.198452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.198467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.198653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.198695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.198972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.199006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.199140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.199172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.199453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.199484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-11-29 13:13:09.199690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.199722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.200012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.200045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.200184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.200216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.200419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.200451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.200701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.200715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-11-29 13:13:09.200837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.200852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.200922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.200941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.201111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.201143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.201397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.201429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.201638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.201653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-11-29 13:13:09.201890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.201905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.202064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.202080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.202240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.202272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.202524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.202556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.202691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.202722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-11-29 13:13:09.202982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.203015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.203210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.203250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.203362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.203377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.203548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.203579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.203723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.203755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-11-29 13:13:09.203887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.203919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.204108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.204140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.204258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.204289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.204619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.204651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.204845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.204877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-11-29 13:13:09.205077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-11-29 13:13:09.205110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-11-29 13:13:09.205392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.205407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.205627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.205641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.205862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.205877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.206033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.206049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-11-29 13:13:09.206138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.206153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.206309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.206324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.206504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.206519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.206675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.206713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.206852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.206884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-11-29 13:13:09.207005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.207039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.207294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.207324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.207466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.207498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.207779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.207794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.207879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.207894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-11-29 13:13:09.208139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.208171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.208425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.208469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.208573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.208589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.208801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.208816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.209057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.209090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-11-29 13:13:09.209239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.209271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.209465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.209497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.209623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.209638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.209796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.209811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.210064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.210080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-11-29 13:13:09.210187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.210201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.210375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.210406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.210516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.210546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.210684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.210716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-11-29 13:13:09.210900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-11-29 13:13:09.210930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.482 [2024-11-29 13:13:09.214445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-11-29 13:13:09.214477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-11-29 13:13:09.214609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-11-29 13:13:09.214641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-11-29 13:13:09.214833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-11-29 13:13:09.214865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-11-29 13:13:09.215056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-11-29 13:13:09.215090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-11-29 13:13:09.215150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e08b20 (9): Bad file descriptor
00:29:09.482 [2024-11-29 13:13:09.215394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-11-29 13:13:09.215427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.484 [2024-11-29 13:13:09.236089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.236121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-11-29 13:13:09.236231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.236263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-11-29 13:13:09.236463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.236496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-11-29 13:13:09.236607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.236618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-11-29 13:13:09.236716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.236727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 
00:29:09.484 [2024-11-29 13:13:09.236819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.236851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-11-29 13:13:09.237076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.237110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-11-29 13:13:09.237397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.237428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-11-29 13:13:09.237631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.237662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-11-29 13:13:09.237844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-11-29 13:13:09.237875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 
00:29:09.484 [2024-11-29 13:13:09.238015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.238048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.238262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.238295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.238412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.238443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.238628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.238660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.238786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.238818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-11-29 13:13:09.239079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.239117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.239308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.239340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.239551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.239584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.239719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.239750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.239862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.239894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-11-29 13:13:09.240115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.240147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.240417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.240448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.240693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.240725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.240981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.241015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.241142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.241173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-11-29 13:13:09.241366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.241397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.241520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.241552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.241741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.241751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.241847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.241858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.242058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.242070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-11-29 13:13:09.242277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.242309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.242441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.242473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.242618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.242650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.242856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.242867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.242933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.242944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-11-29 13:13:09.243094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.243105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.243174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.243185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.243325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.243336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.243426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.243436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.243580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.243591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-11-29 13:13:09.243738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.243781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.244005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.244038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.244225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.244257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.244393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.244403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.244482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.244494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-11-29 13:13:09.244695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.244706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.244909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.244920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.245004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.245016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-11-29 13:13:09.245155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-11-29 13:13:09.245166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.245250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.245261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-11-29 13:13:09.245410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.245421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.245498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.245509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.245658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.245689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.245826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.245858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.246035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.246069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-11-29 13:13:09.246348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.246385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.246569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.246601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.246793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.246825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.247078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.247110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.247390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.247401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-11-29 13:13:09.247534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.247544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.247644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.247655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.247811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.247822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.247970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.248003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.248190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.248222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-11-29 13:13:09.248435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.248467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.248659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.248699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.248790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.248811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.248890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.248901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.248981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.248993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-11-29 13:13:09.249135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.249146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.249245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.249278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.249455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.249486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.249707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.249739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-11-29 13:13:09.249914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-11-29 13:13:09.249925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-11-29 13:13:09.250003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.486 [2024-11-29 13:13:09.250015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.486 qpair failed and we were unable to recover it.
00:29:09.486 [2024-11-29 13:13:09.250216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.486 [2024-11-29 13:13:09.250227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.486 qpair failed and we were unable to recover it.
00:29:09.486 [2024-11-29 13:13:09.250375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.486 [2024-11-29 13:13:09.250386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.486 qpair failed and we were unable to recover it.
00:29:09.486 [2024-11-29 13:13:09.250487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.486 [2024-11-29 13:13:09.250497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.486 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.250635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.250646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.250822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.250833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.250993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.251005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.251094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.251105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.251184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.251194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.251340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.251350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.251552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.251563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.251646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.251657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.251790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.251801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.251940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.251959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.252044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.252055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.252145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.252155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.252299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.252309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.252508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.252518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.252614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.252624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.252699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.252710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.252786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.252798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.252938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.252953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.253041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.253052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.253194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.253205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.253438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.253470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.253682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.768 [2024-11-29 13:13:09.253714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.768 qpair failed and we were unable to recover it.
00:29:09.768 [2024-11-29 13:13:09.253843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.253875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.254001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.254034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.254355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.254389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.254596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.254628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.254805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.254837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.255107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.255141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.255326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.255357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.255628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.255660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.255877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.255888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.256047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.256058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.256194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.256205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.256302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.256313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.256515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.256525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.256674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.256685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.256844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.256876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.257057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.257089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.257303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.257345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.257432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.257448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.257601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.257642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.257824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.257855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.258039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.258072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.258338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.258407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.258621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.258657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.258850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.258865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.259095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.259130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.259275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.259307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.259502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.259544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.259704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.259719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.259877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.259891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.259972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.259988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.260152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.260183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.260377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.260409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.260541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.260572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.260716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.260748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.260870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.260903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.261137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.261170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.261390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.261421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.261668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.769 [2024-11-29 13:13:09.261700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.769 qpair failed and we were unable to recover it.
00:29:09.769 [2024-11-29 13:13:09.261976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.261992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.262245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.262260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.262426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.262440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.262600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.262615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.262714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.262729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.262875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.262889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.263095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.263110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.263202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.263217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.263383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.263414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.263533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.263564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.263759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.263797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.263902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.263917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.264009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.264024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.264168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.264183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.264262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.264277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.264449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.264464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.264702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.264734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.265028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.265061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.265195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.265227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.265431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.265463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.265574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.265606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.265876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.265908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.266161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.266194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.266325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.266356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.266656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.266688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.266864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.266897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.267114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.267147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.267420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.267453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.267729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.267743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.267901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.267915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.268000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.268016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.268102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.268116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.268272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.268304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.268437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.268469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.268604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.268635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.268828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.268843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.269078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.269111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.269238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.269276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.269464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.770 [2024-11-29 13:13:09.269496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.770 qpair failed and we were unable to recover it.
00:29:09.770 [2024-11-29 13:13:09.269702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.269717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.269797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.269812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.269902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.269917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.270055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.270071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.270228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.270243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.270394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.270426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.270620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.270652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.270884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.270916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.271190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.271223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.271407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.271440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.271617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.271631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.271804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.771 [2024-11-29 13:13:09.271835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.771 qpair failed and we were unable to recover it.
00:29:09.771 [2024-11-29 13:13:09.272119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.272154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.272433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.272465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.272679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.272711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.272833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.272864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.273082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.273116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 
00:29:09.771 [2024-11-29 13:13:09.273383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.273415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.273590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.273622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.273810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.273842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.274112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.274145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.274360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.274391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 
00:29:09.771 [2024-11-29 13:13:09.274609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.274623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.274844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.274876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.275055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.275089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.275217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.275253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.275444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.275476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 
00:29:09.771 [2024-11-29 13:13:09.275614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.275646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.275766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.275781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.275923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.275938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.276150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.276165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.276321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.276335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 
00:29:09.771 [2024-11-29 13:13:09.276573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.276605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.276861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.276894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.277079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.277112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.277361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.277392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.277588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.277603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 
00:29:09.771 [2024-11-29 13:13:09.277816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.277847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.771 [2024-11-29 13:13:09.278056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.771 [2024-11-29 13:13:09.278090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.771 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.278227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.278259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.278471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.278502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.278765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.278797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 
00:29:09.772 [2024-11-29 13:13:09.278913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.278927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.279076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.279091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.279193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.279207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.279364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.279396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.279578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.279610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 
00:29:09.772 [2024-11-29 13:13:09.279727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.279759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.279993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.280026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.280221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.280252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.280448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.280485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.280623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.280637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 
00:29:09.772 [2024-11-29 13:13:09.280792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.280823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.281029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.281062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.281257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.281289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.281422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.281454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.281650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.281682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 
00:29:09.772 [2024-11-29 13:13:09.281860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.281891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.282081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.282115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.282296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.282327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.282541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.282573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.282708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.282740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 
00:29:09.772 [2024-11-29 13:13:09.282932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.282974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.283218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.283250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.283439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.283471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.283668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.283699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.284027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.284098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 
00:29:09.772 [2024-11-29 13:13:09.284297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.284332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.284483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.284517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.284790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.284822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.285001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.285035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.772 qpair failed and we were unable to recover it. 00:29:09.772 [2024-11-29 13:13:09.285144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.772 [2024-11-29 13:13:09.285176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 
00:29:09.773 [2024-11-29 13:13:09.285298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.285329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.285521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.285551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.285746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.285777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.286044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.286077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.286269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.286300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 
00:29:09.773 [2024-11-29 13:13:09.286488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.286520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.286707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.286721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.286989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.287031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.287217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.287250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.287422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.287452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 
00:29:09.773 [2024-11-29 13:13:09.287564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.287579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.287762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.287794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.287984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.288017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.288164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.288196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.288394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.288425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 
00:29:09.773 [2024-11-29 13:13:09.288669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.288702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.288915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.288946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.289108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.289141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.289325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.289357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.289654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.289686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 
00:29:09.773 [2024-11-29 13:13:09.289821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.289852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.290104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.290120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.290263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.290277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.290424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.290455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 00:29:09.773 [2024-11-29 13:13:09.290581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.773 [2024-11-29 13:13:09.290613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.773 qpair failed and we were unable to recover it. 
00:29:09.773 [2024-11-29 13:13:09.290879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.290910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.291098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.291130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.291309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.291341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.291476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.291508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.291684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.291698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.291856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.291888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.292142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.292175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.292376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.292408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.292513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.292527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.292683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.292697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.292781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.292796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.292937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.292954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.773 qpair failed and we were unable to recover it.
00:29:09.773 [2024-11-29 13:13:09.293034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.773 [2024-11-29 13:13:09.293049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.293139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.293154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.293267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.293298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.293478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.293510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.293654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.293685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.293880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.293911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.294051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.294084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.294379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.294411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.294624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.294656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.294857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.294888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.295144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.295182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.295428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.295460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.295726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.295758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.295962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.295977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.296078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.296092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.296257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.296271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.296456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.296487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.296676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.296708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.296853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.296884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.297097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.297112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.297212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.297247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.297387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.297418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.297527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.297564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.297647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.297662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.297809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.297824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.298012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.298044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.298165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.298197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.298410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.298441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.298636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.298668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.298937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.298979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.299197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.299228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.299411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.299443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.299583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.299615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.299881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.299912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.300168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.300202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.300417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.300448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.300696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.300728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.300913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.300961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.301209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.774 [2024-11-29 13:13:09.301241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.774 qpair failed and we were unable to recover it.
00:29:09.774 [2024-11-29 13:13:09.301437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.301468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.301738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.301753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.301915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.301958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.302101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.302134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.302312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.302343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.302525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.302557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.302678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.302693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.302777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.302792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.302940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.302960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.303188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.303203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.303442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.303474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.303669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.303701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.303885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.303917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.304124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.304157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.304431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.304464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.304666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.304681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.304790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.304822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.305042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.305076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.305257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.305290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.305419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.305451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.305638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.305669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.305791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.305823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.305957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.305972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.306131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.306145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.306286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.306318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.306519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.306551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.306738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.306769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.306891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.306923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.307044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.307077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.307252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.307283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.307388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.307419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.307704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.307737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.307926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.307964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.308260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.308292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.308477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.308509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.308630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.308662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.308847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.308879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.309057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.775 [2024-11-29 13:13:09.309073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.775 qpair failed and we were unable to recover it.
00:29:09.775 [2024-11-29 13:13:09.309180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.776 [2024-11-29 13:13:09.309199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.776 qpair failed and we were unable to recover it.
00:29:09.776 [2024-11-29 13:13:09.309363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.776 [2024-11-29 13:13:09.309378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.776 qpair failed and we were unable to recover it.
00:29:09.776 [2024-11-29 13:13:09.309512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.776 [2024-11-29 13:13:09.309526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.776 qpair failed and we were unable to recover it.
00:29:09.776 [2024-11-29 13:13:09.309703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.776 [2024-11-29 13:13:09.309718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.776 qpair failed and we were unable to recover it.
00:29:09.776 [2024-11-29 13:13:09.309810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.776 [2024-11-29 13:13:09.309824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.776 qpair failed and we were unable to recover it.
00:29:09.776 [2024-11-29 13:13:09.309961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.776 [2024-11-29 13:13:09.309976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.776 qpair failed and we were unable to recover it.
00:29:09.776 [2024-11-29 13:13:09.310065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.310079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.310173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.310187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.310276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.310290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.310434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.310448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.310548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.310580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 
00:29:09.776 [2024-11-29 13:13:09.310690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.310721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.310843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.310874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.311071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.311118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.311203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.311218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.311378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.311421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 
00:29:09.776 [2024-11-29 13:13:09.311599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.311630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.311823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.311855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.311975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.312008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.312195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.312225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.312407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.312439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 
00:29:09.776 [2024-11-29 13:13:09.312649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.312680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.312789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.312821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.313006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.313022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.313094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.313108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.313214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.313229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 
00:29:09.776 [2024-11-29 13:13:09.313462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.313494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.313700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.313732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.313994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.314010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.314159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.314174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.314244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.314258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 
00:29:09.776 [2024-11-29 13:13:09.314426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.314440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.314527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.314542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.314699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.314714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.314929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.314968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.315160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.315191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 
00:29:09.776 [2024-11-29 13:13:09.315319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.315351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.315484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.315523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.776 [2024-11-29 13:13:09.315619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.776 [2024-11-29 13:13:09.315633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.776 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.315730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.315744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.315831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.315848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 
00:29:09.777 [2024-11-29 13:13:09.315926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.315940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.316172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.316187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.316272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.316302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.316492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.316523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.316667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.316698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 
00:29:09.777 [2024-11-29 13:13:09.316895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.316926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.317072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.317105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.317240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.317272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.317451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.317483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.317654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.317668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 
00:29:09.777 [2024-11-29 13:13:09.317856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.317888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.318034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.318067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.318310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.318342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.318608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.318640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.318749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.318780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 
00:29:09.777 [2024-11-29 13:13:09.318980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.319012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.319267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.319299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.319493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.319524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.319715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.319746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.319854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.319869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 
00:29:09.777 [2024-11-29 13:13:09.320086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.320119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.320243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.320274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.320498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.320529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.320773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.320805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.320932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.320973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 
00:29:09.777 [2024-11-29 13:13:09.321119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.321150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.321411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.321443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.321687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.321727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.321950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.321965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.777 [2024-11-29 13:13:09.322177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.322192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 
00:29:09.777 [2024-11-29 13:13:09.322342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.777 [2024-11-29 13:13:09.322356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.777 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.322520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.322551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.322841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.322873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.323054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.323087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.323303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.323334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 
00:29:09.778 [2024-11-29 13:13:09.323476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.323508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.323784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.323816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.324077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.324092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.324203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.324217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.324385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.324422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 
00:29:09.778 [2024-11-29 13:13:09.324702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.324734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.324911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.324942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.325197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.325229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.325416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.325447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.325619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.325634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 
00:29:09.778 [2024-11-29 13:13:09.325739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.325769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.325964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.325996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.326107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.326139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.326386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.326417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 00:29:09.778 [2024-11-29 13:13:09.326689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.778 [2024-11-29 13:13:09.326720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.778 qpair failed and we were unable to recover it. 
00:29:09.778 [2024-11-29 13:13:09.326919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.778 [2024-11-29 13:13:09.326933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.778 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f8380000b90 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats through timestamp 2024-11-29 13:13:09.350243; duplicate entries elided ...]
00:29:09.781 [2024-11-29 13:13:09.350423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.350454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.350581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.350596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.350687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.350701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.350774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.350789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.350943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.350961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 
00:29:09.781 [2024-11-29 13:13:09.351034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.351048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.351275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.351290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.351365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.351379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.351594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.351608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.351846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.351860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 
00:29:09.781 [2024-11-29 13:13:09.352031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.352046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.352194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.352225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.352414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.352446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.352557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.352588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.352759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.352791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 
00:29:09.781 [2024-11-29 13:13:09.352988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.353021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.781 [2024-11-29 13:13:09.353196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.781 [2024-11-29 13:13:09.353227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.781 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.353417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.353449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.353716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.353747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.353940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.353990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 
00:29:09.782 [2024-11-29 13:13:09.354165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.354179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.354315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.354330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.354482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.354514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.354690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.354722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.354857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.354889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 
00:29:09.782 [2024-11-29 13:13:09.355193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.355207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.355404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.355435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.355655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.355686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.355926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.355978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.356167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.356199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 
00:29:09.782 [2024-11-29 13:13:09.356378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.356410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.356624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.356655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.356862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.356894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.357117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.357150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.357280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.357316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 
00:29:09.782 [2024-11-29 13:13:09.357587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.357618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.357862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.357894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.358141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.358174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.358344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.358376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.358554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.358586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 
00:29:09.782 [2024-11-29 13:13:09.358774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.358788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.359022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.359056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.359231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.359264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.359400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.359431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.359619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.359651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 
00:29:09.782 [2024-11-29 13:13:09.359853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.359885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.360151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.360166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.360379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.360411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.360595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.360627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.360837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.360869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 
00:29:09.782 [2024-11-29 13:13:09.361110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.361142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.361277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.782 [2024-11-29 13:13:09.361308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.782 qpair failed and we were unable to recover it. 00:29:09.782 [2024-11-29 13:13:09.361493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.361525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.361764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.361778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.361877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.361907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 
00:29:09.783 [2024-11-29 13:13:09.362091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.362123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.362246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.362277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.362454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.362485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.362691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.362724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.362925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.362968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 
00:29:09.783 [2024-11-29 13:13:09.363251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.363265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.363530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.363567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.363740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.363756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.363988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.364005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.364088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.364103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 
00:29:09.783 [2024-11-29 13:13:09.364216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.364231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.364384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.364398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.364557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.364588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.364722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.364753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.365000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.365034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 
00:29:09.783 [2024-11-29 13:13:09.365232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.365263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.365389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.365419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.365705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.365737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.365872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.365903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.366112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.366145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 
00:29:09.783 [2024-11-29 13:13:09.366347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.366378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.366634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.366665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.366773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.366804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.366978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.366992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.367226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.367257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 
00:29:09.783 [2024-11-29 13:13:09.367501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.367533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.367653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.367686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.367763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.367777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.368011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.368043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.368255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.368286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 
00:29:09.783 [2024-11-29 13:13:09.368505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.368536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.368656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.368687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.368926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.368941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.369127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.369162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.783 [2024-11-29 13:13:09.369346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.369378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 
00:29:09.783 [2024-11-29 13:13:09.369566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-29 13:13:09.369598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.783 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.369799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.369813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.369899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.369929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.370075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.370107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.370351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.370383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 
00:29:09.784 [2024-11-29 13:13:09.370520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.370551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.370760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.370799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.371010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.371025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.371119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.371133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.371310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.371341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 
00:29:09.784 [2024-11-29 13:13:09.371521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.371553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.371803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.371835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.372023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.372038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.372250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.372264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.372423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.372454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 
00:29:09.784 [2024-11-29 13:13:09.372634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.372665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.372799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.372831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.373071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.373104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.373398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.373429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.373622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.373654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 
00:29:09.784 [2024-11-29 13:13:09.373794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.373826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.374028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.374061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.374242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.374273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.374462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.374494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.374738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.374770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 
00:29:09.784 [2024-11-29 13:13:09.375002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.375017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.375198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.375213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.375313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.375327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.375540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.375572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.375758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.375790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 
00:29:09.784 [2024-11-29 13:13:09.375984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.376017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.376196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.376227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.376337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.376369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.376574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.376606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.376806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.376837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 
00:29:09.784 [2024-11-29 13:13:09.376943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.376985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.377185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.377217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.377408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.377439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.377680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-29 13:13:09.377716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.784 qpair failed and we were unable to recover it. 00:29:09.784 [2024-11-29 13:13:09.377929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.377970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 
00:29:09.785 [2024-11-29 13:13:09.378109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.378141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.378320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.378350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.378645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.378676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.378942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.378983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.379256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.379287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 
00:29:09.785 [2024-11-29 13:13:09.379411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.379442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.379635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.379666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.379890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.379904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.379997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.380013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.380162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.380176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 
00:29:09.785 [2024-11-29 13:13:09.380325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.380340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.380432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.380463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.380657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.380688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.380881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.380913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.381104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.381119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 
00:29:09.785 [2024-11-29 13:13:09.381288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.381319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.381443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.381475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.381607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.381638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.381814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.381845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.382022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.382054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 
00:29:09.785 [2024-11-29 13:13:09.382308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.382339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.382583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.382614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.382790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.382821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.383018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.383051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.383317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.383348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 
00:29:09.785 [2024-11-29 13:13:09.383537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.383569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.383701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.383732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.383978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.383993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.384239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.384270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.384411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.384442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 
00:29:09.785 [2024-11-29 13:13:09.384654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.384684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.384875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.384906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.385181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.385214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.385402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.385434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.385701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.385733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 
00:29:09.785 [2024-11-29 13:13:09.385924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.385962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.386088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.386119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.785 [2024-11-29 13:13:09.386409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.785 [2024-11-29 13:13:09.386440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.785 qpair failed and we were unable to recover it. 00:29:09.786 [2024-11-29 13:13:09.386544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.786 [2024-11-29 13:13:09.386581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.786 qpair failed and we were unable to recover it. 00:29:09.786 [2024-11-29 13:13:09.386723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.786 [2024-11-29 13:13:09.386755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.786 qpair failed and we were unable to recover it. 
00:29:09.786 [2024-11-29 13:13:09.386966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.786 [2024-11-29 13:13:09.386999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.786 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure for tqpair=0x7f8380000b90 (addr=10.0.0.2, port=4420, errno = 111) repeats continuously for the intermediate timestamps 13:13:09.387145 through 13:13:09.410565 ...]
00:29:09.789 [2024-11-29 13:13:09.410755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.789 [2024-11-29 13:13:09.410787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.789 qpair failed and we were unable to recover it.
00:29:09.789 [2024-11-29 13:13:09.410972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.410988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.411140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.411155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.411306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.411338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.411478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.411515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.411732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.411763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 
00:29:09.789 [2024-11-29 13:13:09.411977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.412009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.412280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.412310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.412537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.412569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.412759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.412789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.412986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.413018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 
00:29:09.789 [2024-11-29 13:13:09.413145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.413176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.413440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.413471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.413604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.413636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.413815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.413847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.414038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.414053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 
00:29:09.789 [2024-11-29 13:13:09.414136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.414150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.414303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.414318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.414410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.414453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.414668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.414700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.414895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.414925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 
00:29:09.789 [2024-11-29 13:13:09.415062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.415077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.415168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.415182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.415347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.415378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.415570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.415601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.415795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.415827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 
00:29:09.789 [2024-11-29 13:13:09.415999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.416015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.416186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.416200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.416420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.416434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.416585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.416600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.416683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.416699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 
00:29:09.789 [2024-11-29 13:13:09.416839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.789 [2024-11-29 13:13:09.416854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.789 qpair failed and we were unable to recover it. 00:29:09.789 [2024-11-29 13:13:09.416958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.416972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.417058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.417072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.417169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.417183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.417263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.417276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 
00:29:09.790 [2024-11-29 13:13:09.417495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.417527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.417716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.417746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.417859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.417890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.418023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.418056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.418226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.418257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 
00:29:09.790 [2024-11-29 13:13:09.418439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.418470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.418730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.418762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.418867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.418882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.419039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.419057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.419283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.419297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 
00:29:09.790 [2024-11-29 13:13:09.419503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.419518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.419595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.419609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.419678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.419692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.419831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.419846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.420013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.420028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 
00:29:09.790 [2024-11-29 13:13:09.420122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.420163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.420432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.420463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.420655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.420687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.420894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.420926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.421055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.421088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 
00:29:09.790 [2024-11-29 13:13:09.421216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.421248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.421493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.421524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.421719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.421751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.421874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.421905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.422101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.422116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 
00:29:09.790 [2024-11-29 13:13:09.422281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.422312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.422449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.422480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.422658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.422689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.422883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.422914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.423035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.423067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 
00:29:09.790 [2024-11-29 13:13:09.423240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.423255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.423401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.423432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.423672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.423703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.423891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.423922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 00:29:09.790 [2024-11-29 13:13:09.424163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.790 [2024-11-29 13:13:09.424178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.790 qpair failed and we were unable to recover it. 
00:29:09.791 [2024-11-29 13:13:09.424320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.791 [2024-11-29 13:13:09.424334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.791 qpair failed and we were unable to recover it. 00:29:09.791 [2024-11-29 13:13:09.424501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.791 [2024-11-29 13:13:09.424532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.791 qpair failed and we were unable to recover it. 00:29:09.791 [2024-11-29 13:13:09.424715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.791 [2024-11-29 13:13:09.424747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.791 qpair failed and we were unable to recover it. 00:29:09.791 [2024-11-29 13:13:09.424871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.791 [2024-11-29 13:13:09.424902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.791 qpair failed and we were unable to recover it. 00:29:09.791 [2024-11-29 13:13:09.425184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.791 [2024-11-29 13:13:09.425199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:09.791 qpair failed and we were unable to recover it. 
00:29:09.791 [2024-11-29 13:13:09.425362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-11-29 13:13:09.425393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [the connect()/qpair-failure sequence above repeats verbatim for tqpair=0x7f8380000b90 from 13:13:09.425540 through 13:13:09.432314; duplicate entries elided]
00:29:09.792 [2024-11-29 13:13:09.432561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-11-29 13:13:09.432632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.794 [the connect()/qpair-failure sequence above repeats verbatim for tqpair=0x7f8384000b90 from 13:13:09.432865 through 13:13:09.447772; duplicate entries elided]
00:29:09.794 [2024-11-29 13:13:09.447969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.448003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.448203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.448234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.448427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.448459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.448570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.448601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.448788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.448820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-11-29 13:13:09.448942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.448988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.449230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.449262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.449492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.449502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.449635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.449646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.449711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.449722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-11-29 13:13:09.449799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.449810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.450068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.450101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.450372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.450403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.450578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.450616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.450734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.450765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-11-29 13:13:09.451070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.451103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.451350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.451381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.451567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.451599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.451738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.451769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.451910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.451942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-11-29 13:13:09.452198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.452209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.452348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.452359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-11-29 13:13:09.452451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-11-29 13:13:09.452461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.452665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.452697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.452889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.452899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-11-29 13:13:09.453051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.453062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.453287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.453319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.453456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.453487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.453631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.453662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.453908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.453940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-11-29 13:13:09.454077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.454109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.454353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.454385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.454591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.454622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.454812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.454844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.455091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.455124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-11-29 13:13:09.455238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.455248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.455391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.455402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.455549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.455559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.455777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.455788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.455865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.455876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-11-29 13:13:09.455986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.456020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.456274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.456306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.456429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.456461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.456714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.456745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.456933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.456944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-11-29 13:13:09.457109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.457141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.457278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.457310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.457494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.457525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.457769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.457800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.457911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.457943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-11-29 13:13:09.458157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.458188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.458318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.458328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.458539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.458570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.458766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.458803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.458996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.459030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-11-29 13:13:09.459132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-11-29 13:13:09.459143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-11-29 13:13:09.459210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.459220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.459442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.459453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.459613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.459624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.459705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.459715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 
00:29:09.796 [2024-11-29 13:13:09.459893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.459925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.460088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.460121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.460372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.460383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.460601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.460632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.460768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.460800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 
00:29:09.796 [2024-11-29 13:13:09.461071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.461104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.461283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.461314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.461436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.461468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.461679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.461711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.461816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.461848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 
00:29:09.796 [2024-11-29 13:13:09.461990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.462023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.462206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.462243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.462341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.462357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.462529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.462544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.462758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.462790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 
00:29:09.796 [2024-11-29 13:13:09.463060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.463093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.463281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.463296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.463480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.463512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.463706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.463737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.463935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.463978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 
00:29:09.796 [2024-11-29 13:13:09.464133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.464168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.464342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.464373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.464507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.464544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.464765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.464775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 00:29:09.796 [2024-11-29 13:13:09.464944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.796 [2024-11-29 13:13:09.464959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.796 qpair failed and we were unable to recover it. 
00:29:09.796 [2024-11-29 13:13:09.465103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-11-29 13:13:09.465134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-11-29 13:13:09.465332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-11-29 13:13:09.465364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-11-29 13:13:09.465496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-11-29 13:13:09.465527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-11-29 13:13:09.465719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-11-29 13:13:09.465750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-11-29 13:13:09.465943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-11-29 13:13:09.465984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-11-29 13:13:09.466229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-11-29 13:13:09.466260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-11-29 13:13:09.466511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-11-29 13:13:09.466543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-11-29 13:13:09.466739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-11-29 13:13:09.466771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-11-29 13:13:09.466900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.796 [2024-11-29 13:13:09.466923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.796 qpair failed and we were unable to recover it.
00:29:09.796 [2024-11-29 13:13:09.467163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.467175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.467322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.467333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.467399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.467410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.467511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.467521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.467597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.467607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.467740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.467751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.468007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.468041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.468235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.468266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.468446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.468477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.468687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.468719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.468922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.468964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.469196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.469228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.469490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.469521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.469721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.469753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.469941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.469984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.470183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.470194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.470273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.470284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.470434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.470445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.470679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.470710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.470835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.470867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.471056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.471090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.471225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.471257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.471450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.471482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.471742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.471774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.471896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.471907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.472089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.472122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.472456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.472528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.472748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.472785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.473035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.473069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.473299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.473331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.473574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.473605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.473852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.473882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.474004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.474036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.474229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.474260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.474387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.474418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.474605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.474636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.474761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.474790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.474885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.474899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.797 [2024-11-29 13:13:09.475055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.797 [2024-11-29 13:13:09.475070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.797 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.475301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.475332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.475527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.475558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.475687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.475718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.475902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.475933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.476145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.476176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.476393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.476424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.476618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.476649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.476846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.476877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.477144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.477176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.477438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.477469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.477593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.477625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.477814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.477846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.478036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.478067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.478262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.478294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.478499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.478537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.478751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.478782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.479029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.479062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.479248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.479262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.479447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.479461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.479606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.479621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.479723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.479764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.479884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.479915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.480136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.480169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.480366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.480397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.480606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.480638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.480851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.480882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.481091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.481123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.481303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.481334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.481529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.481561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.481687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.481718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.481913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.481944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.482126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.482165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.482305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.482319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.482487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.482518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.482637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.482668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.482858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.482888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.483084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.483116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.483358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.483394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.798 qpair failed and we were unable to recover it.
00:29:09.798 [2024-11-29 13:13:09.483538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.798 [2024-11-29 13:13:09.483553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.799 qpair failed and we were unable to recover it.
00:29:09.799 [2024-11-29 13:13:09.483673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.799 [2024-11-29 13:13:09.483704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.799 qpair failed and we were unable to recover it.
00:29:09.799 [2024-11-29 13:13:09.483832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.799 [2024-11-29 13:13:09.483862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.799 qpair failed and we were unable to recover it.
00:29:09.799 [2024-11-29 13:13:09.483997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.799 [2024-11-29 13:13:09.484035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.799 qpair failed and we were unable to recover it.
00:29:09.799 [2024-11-29 13:13:09.484231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.799 [2024-11-29 13:13:09.484263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.799 qpair failed and we were unable to recover it.
00:29:09.799 [2024-11-29 13:13:09.484379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.799 [2024-11-29 13:13:09.484410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.799 qpair failed and we were unable to recover it.
00:29:09.799 [2024-11-29 13:13:09.484599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.799 [2024-11-29 13:13:09.484631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.799 qpair failed and we were unable to recover it.
00:29:09.799 [2024-11-29 13:13:09.484808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.799 [2024-11-29 13:13:09.484839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.799 qpair failed and we were unable to recover it.
00:29:09.799 [2024-11-29 13:13:09.484970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.799 [2024-11-29 13:13:09.485003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.799 qpair failed and we were unable to recover it.
00:29:09.799 [2024-11-29 13:13:09.485247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.485288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.485371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.485386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.485537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.485551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.485707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.485721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.485937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.485980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-11-29 13:13:09.486231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.486262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.486503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.486534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.486800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.486831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.486978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.487019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.487194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.487208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-11-29 13:13:09.487311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.487342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.487581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.487613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.487753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.487784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.488048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.488081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.488260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.488291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-11-29 13:13:09.488399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.488430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.488632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.488663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.488931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.488981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.489225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.489257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.489449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.489481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-11-29 13:13:09.489604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.489634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.489823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.489854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.490117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.490150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.490341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.490372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.490560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.490592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-11-29 13:13:09.490795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.490826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.491014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.491046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.491185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.491199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.491371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.491402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-11-29 13:13:09.491516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-11-29 13:13:09.491547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-11-29 13:13:09.491803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.491833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.492020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.492053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.492198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.492229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.492431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.492463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.492654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.492684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-11-29 13:13:09.492850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.492885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.493073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.493086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.493231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.493241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.493319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.493330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.493424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.493434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-11-29 13:13:09.493499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.493509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.493659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.493690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.493860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.493891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.494091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.494124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.494317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.494327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-11-29 13:13:09.494459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.494479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.494631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.494642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.494723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.494733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.494855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.494891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.495107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.495138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-11-29 13:13:09.495346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.495378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.495514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.495545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.495759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.495790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.495986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.496019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.496222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.496254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-11-29 13:13:09.496457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.496488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.496757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.496789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.497020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.497052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.497244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.497275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.497468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.497499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-11-29 13:13:09.497673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.497706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.497969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.498014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.498169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.498180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.498334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-11-29 13:13:09.498365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-11-29 13:13:09.498493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.498525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-11-29 13:13:09.498669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.498701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.498823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.498854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.499049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.499081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.499219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.499231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.499298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.499308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-11-29 13:13:09.499436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.499447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.499526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.499536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.499739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.499751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.499882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.499893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.500045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.500077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-11-29 13:13:09.500313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.500356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.500496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.500529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.500662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.500694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.500875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.500906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.501058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.501092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-11-29 13:13:09.501364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.501395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.501575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.501606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.501785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.501816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.502008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.502041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.502180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.502212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-11-29 13:13:09.502364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.502395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.502585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.502615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.502929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.502969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.503110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.503129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.503229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.503244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-11-29 13:13:09.503343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.503357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.503531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.503561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.503702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.503732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.503920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.503961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.504161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.504193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-11-29 13:13:09.504327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.504359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.504605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.504637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.504882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.504914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.505130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.505167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.505359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.505374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-11-29 13:13:09.505524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.505556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.505802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.505832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-11-29 13:13:09.506039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-11-29 13:13:09.506072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.506206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.506237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.506448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.506481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-11-29 13:13:09.506618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.506648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.506791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.506823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.506998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.507013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.507123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.507137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.507286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.507301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-11-29 13:13:09.507472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.507504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.507696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.507727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.508001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.508040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.508185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.508199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.508291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.508326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-11-29 13:13:09.508518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.508555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.508749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.508781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.508954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.508969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.509194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.509226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.509362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.509393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-11-29 13:13:09.509637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.509668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.509805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.509837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.509966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.509998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.510271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.510303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.510442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.510473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-11-29 13:13:09.510675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.510706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.510921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.510965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.511144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.511158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.511345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.511376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.511573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.511604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-11-29 13:13:09.511738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.511770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.511971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.512003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.512184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.512216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.512349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.512382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.512576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.512606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-11-29 13:13:09.512716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.512748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.512943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.512985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.513206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.513238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.513357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.513389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.513512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.513543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-11-29 13:13:09.513724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-11-29 13:13:09.513756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-11-29 13:13:09.513874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.513905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.514115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.514149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.514354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.514385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.514559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.514590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-11-29 13:13:09.514769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.514801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.515006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.515038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.515168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.515199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.515315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.515347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.515468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.515499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-11-29 13:13:09.515687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.515718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.515847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.515878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.516058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.516091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.516354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.516385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.516582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.516612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-11-29 13:13:09.516830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.516861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.517081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.517114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.517315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.517347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.517459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.517491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.517629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.517660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-11-29 13:13:09.517858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.517889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.518008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.518040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.518290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.518321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.518487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.518501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.518578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.518592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-11-29 13:13:09.518743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.518758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.518844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.518872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.518995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.519027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.519308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.519339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.519608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.519622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-11-29 13:13:09.519775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.519790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.519885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.519926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.520065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.520097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.520206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.520239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.520431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.520463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-11-29 13:13:09.520634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.520648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.520731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.520745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.520831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.520845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.520918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.520932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-11-29 13:13:09.521105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-11-29 13:13:09.521137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-11-29 13:13:09.521339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.521370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-11-29 13:13:09.521560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.521592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-11-29 13:13:09.521770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.521801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-11-29 13:13:09.521925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.521986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-11-29 13:13:09.522250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.522281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-11-29 13:13:09.522480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.522511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-11-29 13:13:09.522687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.522718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-11-29 13:13:09.522895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.522927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-11-29 13:13:09.523180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.523213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-11-29 13:13:09.523400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-11-29 13:13:09.523414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-11-29 13:13:09.523490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.523504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.523717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.523732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.523835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.523849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.524075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.524109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.524234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.524264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.524396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.524428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.524616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.524660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.524819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.524833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.524988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.525020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.525273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.525304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.525416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.525447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.525660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.525691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.525986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.526020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.526223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.526255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.526471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.526503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.526648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.526680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.526929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.526969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.527096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.527128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.527272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.527303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.527480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.527511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.527653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.527690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.527894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.527926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.528057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.804 [2024-11-29 13:13:09.528102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.804 qpair failed and we were unable to recover it.
00:29:09.804 [2024-11-29 13:13:09.528241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.528255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.528399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.528430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.528569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.528601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.528852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.528883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.529071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.529104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.529232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.529264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.529439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.529453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.529604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.529636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.529769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.529800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.529997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.530031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.530224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.530239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.530406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.530437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.530566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.530597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.530777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.530809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.531070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.531103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.531287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.531320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.531508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.531540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.531656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.531688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.531799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.531831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.532023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.532056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.532236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.532268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.532385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.532416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.532612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.532644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.532891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.532906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.533046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.533061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.533232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.533246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.533333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.533347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.533601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.533633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.533908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.533940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.534107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.534139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.534397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.534412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.534591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.534605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.534749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.534781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.534974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.535007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.535213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.535246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.535431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.535462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.535710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.535742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.535864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.535895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.536067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.805 [2024-11-29 13:13:09.536138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420
00:29:09.805 qpair failed and we were unable to recover it.
00:29:09.805 [2024-11-29 13:13:09.536456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.536483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.536629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.536641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.536791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.536801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.537052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.537063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.537213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.537223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.537361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.537394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.537532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.537563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.537714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.537744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.537916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.537961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.538189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.538209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.538288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.538297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.538369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.538379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.538523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.538537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.538676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.538686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.538864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.538878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.539093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.539126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.539275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.539306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.539448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.539480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.539722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.539732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.539955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.539966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.540121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.540131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.540288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.540319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.540434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.540466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.540720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.540752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.540945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.540988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.541202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.541233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.541513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.541523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.541700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.541731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.541966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.541999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.542215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.542225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.542361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.542392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.542639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.542671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.542799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.542830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2152741 Killed "${NVMF_APP[@]}" "$@"
00:29:09.806 [2024-11-29 13:13:09.543041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.543074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.543320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.543353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.543473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.543484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 [2024-11-29 13:13:09.543580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.543591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.806 qpair failed and we were unable to recover it.
00:29:09.806 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:09.806 [2024-11-29 13:13:09.543772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.806 [2024-11-29 13:13:09.543782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.543960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.543976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:09.807 [2024-11-29 13:13:09.544184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.544199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.544277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.544291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:09.807 [2024-11-29 13:13:09.544387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.544402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.544494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.544508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:09.807 [2024-11-29 13:13:09.544743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.544757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.544842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.544856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.807 [2024-11-29 13:13:09.545067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.545082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.545339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.545354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.545583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.545598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.545762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.545776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.545881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.545895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.545971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.545985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.546197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.546210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.546367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.546381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.546478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.546491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.546586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.546600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.546833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.546847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.546944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.546964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.547136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.547151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.547303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.547317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.547397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.547411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.547488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.547502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.547577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.547591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.547751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.547766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.547897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.547911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.548010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.548024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.548184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.548198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.548288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.548302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.548381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.548395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.548492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.548506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.548587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.548601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.548794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.548808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.548961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.548977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.549055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.549069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.549169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.549183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.807 [2024-11-29 13:13:09.549391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.807 [2024-11-29 13:13:09.549405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.807 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.549640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.549655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.549816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.549830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.550052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.550065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.550141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.550152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.550290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.550299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.550383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.550393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.550525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.550536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.550671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.550682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.550819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.550830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.550975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.550986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.551155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.551166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.551232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.551242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.551387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.551397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.551536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.551547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.551600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.551611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2153465
00:29:09.808 [2024-11-29 13:13:09.551686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.551697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.551789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.551799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.551951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.551963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2153465
00:29:09.808 [2024-11-29 13:13:09.552025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.552036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:09.808 [2024-11-29 13:13:09.552135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.552147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.552303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.552313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2153465 ']'
00:29:09.808 [2024-11-29 13:13:09.552452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.552463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.552541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.552552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.552719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-11-29 13:13:09.552730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.552813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.552823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.552907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.552917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.552979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.552994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
[2024-11-29 13:13:09.553073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.553084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.553284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.553294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:09.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-11-29 13:13:09.553376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.553389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.553457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.553467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.553552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.553563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:09.808 [2024-11-29 13:13:09.553711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.553721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.808 [2024-11-29 13:13:09.553803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.808 [2024-11-29 13:13:09.553815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.808 qpair failed and we were unable to recover it.
00:29:09.809 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.809 [2024-11-29 13:13:09.553957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.553969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.554053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.554063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.554146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.554156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.554333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.554346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.554498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.554509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.554578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.554588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.554673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.554686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.554826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.554836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.554897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.554908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.555003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.555015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.555140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.809 [2024-11-29 13:13:09.555151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:09.809 qpair failed and we were unable to recover it.
00:29:09.809 [2024-11-29 13:13:09.555240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.555250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.555415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.555427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.555562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.555573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.555726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.555737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.555952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.555962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-11-29 13:13:09.556105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.556114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.556263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.556273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.556335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.556346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.556410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.556420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.556638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.556648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-11-29 13:13:09.556736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.556746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.556878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.556890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.557036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.557048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.557124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.557135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.557270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.557281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-11-29 13:13:09.557528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.557538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.557618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.557629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.557781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.557791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.557944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.557965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.558144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.558155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-11-29 13:13:09.558233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-11-29 13:13:09.558243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-11-29 13:13:09.558318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.558328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.558464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.558474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.558566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.558577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.558660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.558671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-11-29 13:13:09.558829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.558840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.558912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.558923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.559015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.559100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.559193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-11-29 13:13:09.559348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.559441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.559510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.559607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.559708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-11-29 13:13:09.559795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.559876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.559957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.559968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.560038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.560048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.560131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.560141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-11-29 13:13:09.560279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.560289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.560428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.560439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.560514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.560524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.560602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.560612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.560739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.560749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-11-29 13:13:09.560836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.560846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.560940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.560954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.561033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.561044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.561206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.561216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.561278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.561288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-11-29 13:13:09.561355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.561365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.561446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.561456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.561516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.561526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.561593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.561603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.561755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.561765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-11-29 13:13:09.561903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.561913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.562114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.562125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.562205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.562215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.562352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.562363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-11-29 13:13:09.562472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-11-29 13:13:09.562499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 
00:29:09.811 [2024-11-29 13:13:09.562760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.562778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.562858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.562875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.563028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.563043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.563116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.563130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.563361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.563380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 
00:29:09.811 [2024-11-29 13:13:09.563463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.563479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.563579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.563595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.563679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.563693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.563903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.563919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.564084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.564099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 
00:29:09.811 [2024-11-29 13:13:09.564181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.564196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.564295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.564309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.564386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.564401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.564491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.564508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.564663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.564682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 
00:29:09.811 [2024-11-29 13:13:09.564792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.564808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.564890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.564905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.564995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.565011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-11-29 13:13:09.565127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-11-29 13:13:09.565141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:10.095 [2024-11-29 13:13:09.565254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-11-29 13:13:09.565269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-11-29 13:13:09.565425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-11-29 13:13:09.565440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-11-29 13:13:09.565582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-11-29 13:13:09.565596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-11-29 13:13:09.565681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-11-29 13:13:09.565695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-11-29 13:13:09.565768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-11-29 13:13:09.565782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 00:29:10.095 [2024-11-29 13:13:09.565860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-11-29 13:13:09.565875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [2024-11-29 13:13:09.565972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.095 [2024-11-29 13:13:09.565999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.095 qpair failed and we were unable to recover it. 
00:29:10.095 [... identical connect()/qpair-failure triplet repeated: posix.c:1054:posix_sock_create connect() fails with errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error and "qpair failed and we were unable to recover it." — first for tqpair=0x1dfabe0 (4 attempts, 13:13:09.565972–13:13:09.566571), then for tqpair=0x7f8384000b90 (13:13:09.566584–13:13:09.578584); every attempt targets addr=10.0.0.2, port=4420 and none recovers ...]
00:29:10.098 [2024-11-29 13:13:09.578649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.578659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-11-29 13:13:09.578793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.578803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-11-29 13:13:09.578889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.578899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-11-29 13:13:09.578996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.579006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-11-29 13:13:09.579075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.579085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 
00:29:10.098 [2024-11-29 13:13:09.579173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.579184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-11-29 13:13:09.579418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.579430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-11-29 13:13:09.579502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.579512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-11-29 13:13:09.579581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.579590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 00:29:10.098 [2024-11-29 13:13:09.579667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.579678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.098 qpair failed and we were unable to recover it. 
00:29:10.098 [2024-11-29 13:13:09.579754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.098 [2024-11-29 13:13:09.579765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.579826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.579836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.579935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.579945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 
00:29:10.099 [2024-11-29 13:13:09.580176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 
00:29:10.099 [2024-11-29 13:13:09.580577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.580985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.580996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 
00:29:10.099 [2024-11-29 13:13:09.581057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.581066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.581138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.581148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.581215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.581224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.581295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.581305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 00:29:10.099 [2024-11-29 13:13:09.581376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.099 [2024-11-29 13:13:09.581387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.099 qpair failed and we were unable to recover it. 
00:29:10.099 [2024-11-29 13:13:09.581720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.099 [2024-11-29 13:13:09.581746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.099 qpair failed and we were unable to recover it.
00:29:10.100 [2024-11-29 13:13:09.584858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.100 [2024-11-29 13:13:09.584870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.100 qpair failed and we were unable to recover it.
00:29:10.101 [2024-11-29 13:13:09.589071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.589082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.589224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.589235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.589308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.589318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.589384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.589394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.589528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.589538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 
00:29:10.101 [2024-11-29 13:13:09.589706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.589719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.589808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.589818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.589890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.589900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.590031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.590041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.590116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.590126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 
00:29:10.101 [2024-11-29 13:13:09.590261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.590272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.590354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.590364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.590501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.590511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.590580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.590590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.590735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.590745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 
00:29:10.101 [2024-11-29 13:13:09.590824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.590834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.590903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.590914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.590997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.591008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.591075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.591085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.591175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.591186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 
00:29:10.101 [2024-11-29 13:13:09.591327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.591337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.591407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.591417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.591506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.591516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.591594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.591604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.101 qpair failed and we were unable to recover it. 00:29:10.101 [2024-11-29 13:13:09.591680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.101 [2024-11-29 13:13:09.591690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-11-29 13:13:09.591757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.591767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.591840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.591850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.592006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.592017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.592095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.592105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.592238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.592248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-11-29 13:13:09.592332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.592342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.592420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.592430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.592580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.592590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.592734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.592745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.592877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.592888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-11-29 13:13:09.592955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.592966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.593117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.593128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.593195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.593205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.593282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.593292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.593371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.593381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-11-29 13:13:09.593439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.593449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.593528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.593538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.593620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.593630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.593711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.593722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.593924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.593934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-11-29 13:13:09.594060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.594097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.594260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.594277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.594431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.594445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.594588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.594602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.594690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.594704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-11-29 13:13:09.594843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.594857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.594959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.594973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.595066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.595081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.595175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.595190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.595279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.595293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-11-29 13:13:09.595367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.595381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.595534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.595548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.595719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.595733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.595872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.595887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.596025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.596040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 
00:29:10.102 [2024-11-29 13:13:09.596109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.596124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.596230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.596244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.596329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.102 [2024-11-29 13:13:09.596343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.102 qpair failed and we were unable to recover it. 00:29:10.102 [2024-11-29 13:13:09.596485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.596500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.596641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.596655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-11-29 13:13:09.596725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.596740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.596815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.596830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.596910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.596925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.597019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.597034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.597176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.597190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-11-29 13:13:09.597265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.597280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.597436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.597451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.597532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.597553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.597659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.597673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.597762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.597777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-11-29 13:13:09.597921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.597935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.598033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.598047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.598193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.598207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.598373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.598387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.598519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.598534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-11-29 13:13:09.598606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.598621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.598815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.598830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.598979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.598994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.599071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.599085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.599224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.599238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-11-29 13:13:09.599330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.599344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.599489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.599504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.599655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.599670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.599744] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:29:10.103 [2024-11-29 13:13:09.599785] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.103 [2024-11-29 13:13:09.599830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.599844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-11-29 13:13:09.599928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.599941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.600096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.600109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.600199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.600212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.600286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.600299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.600384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.600396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-11-29 13:13:09.600490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.600499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.600567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.600577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.600656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.600667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.600745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.600755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.600927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.600955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 
00:29:10.103 [2024-11-29 13:13:09.601039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.601055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.103 [2024-11-29 13:13:09.601141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.103 [2024-11-29 13:13:09.601156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.103 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.601240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.601255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.601340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.601355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.601443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.601458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-11-29 13:13:09.601534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.601546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.601723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.601734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.601817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.601829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.601912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.601923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.601991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-11-29 13:13:09.602085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.602163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.602239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.602318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.602409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-11-29 13:13:09.602494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.602570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.602649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.602729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.602804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-11-29 13:13:09.602946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.602975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.603058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.603068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.603205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.603216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.603346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.603357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.603491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.603502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-11-29 13:13:09.603647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.603658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.603749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.603760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.603902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.603913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.603985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.603995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.604061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.604071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-11-29 13:13:09.604287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.604298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.604448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.604458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.604517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.604527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.604685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.604695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 00:29:10.104 [2024-11-29 13:13:09.604757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.104 [2024-11-29 13:13:09.604768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.104 qpair failed and we were unable to recover it. 
00:29:10.104 [2024-11-29 13:13:09.604916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.604927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.605009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.605020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.605092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.605103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.605185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.605197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.605349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.605360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-11-29 13:13:09.605436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.605447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.605645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.605656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.605879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.605890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.605964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.605975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.606053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.606063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-11-29 13:13:09.606137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.606148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.606285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.606296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.606372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.606382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.606447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.606457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.606542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.606552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-11-29 13:13:09.606686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.606697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.606765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.606776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.606849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.606862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.607005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.607023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.607179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.607190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-11-29 13:13:09.607264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.607274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.607349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.607360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.607423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.607434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.607592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.607604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.607695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.607706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-11-29 13:13:09.607845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.607856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.607933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.607943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.608015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.608026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.608102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.608113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.608312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.608323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-11-29 13:13:09.608407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.608417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.608486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.608496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.608639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.608650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.608718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.608729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.608809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.608820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 
00:29:10.105 [2024-11-29 13:13:09.608954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.608965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.609038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.609049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.609198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.105 [2024-11-29 13:13:09.609208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.105 qpair failed and we were unable to recover it. 00:29:10.105 [2024-11-29 13:13:09.609349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-11-29 13:13:09.609360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 00:29:10.106 [2024-11-29 13:13:09.609423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.106 [2024-11-29 13:13:09.609434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.106 qpair failed and we were unable to recover it. 
00:29:10.108 [2024-11-29 13:13:09.623392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-11-29 13:13:09.623403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-11-29 13:13:09.623482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-11-29 13:13:09.623492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-11-29 13:13:09.623575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.108 [2024-11-29 13:13:09.623586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.108 qpair failed and we were unable to recover it. 00:29:10.108 [2024-11-29 13:13:09.623665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.623676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.623807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.623818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-11-29 13:13:09.623967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.623978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.624133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.624144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.624286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.624297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.624431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.624442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.624540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.624551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-11-29 13:13:09.624682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.624693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.624777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.624788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.624919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.624929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.625097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.625109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.625169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.625184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-11-29 13:13:09.625281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.625292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.625442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.625452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.625538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.625548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.625681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.625692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.625786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.625797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-11-29 13:13:09.625970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.625981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.626044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.626055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.626119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.626130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.626264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.626275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.626420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.626431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-11-29 13:13:09.626517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.626528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.626608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.626619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.626701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.626712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.626783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.626793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.626923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.626934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-11-29 13:13:09.627029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.627040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.627177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.627188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.627255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.627266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.627352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.627363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.627448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.627458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 
00:29:10.109 [2024-11-29 13:13:09.627544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.627555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.109 [2024-11-29 13:13:09.627640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.109 [2024-11-29 13:13:09.627651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.109 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.627713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.627724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.627855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.627865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.628007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 
00:29:10.110 [2024-11-29 13:13:09.628084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.628186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.628335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.628418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.628499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 
00:29:10.110 [2024-11-29 13:13:09.628588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.628722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.628882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.628958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.628969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.629044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.629055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 
00:29:10.110 [2024-11-29 13:13:09.629135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.629146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.629286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.629297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.629391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.629402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.629533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.629544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.629604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.629616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 
00:29:10.110 [2024-11-29 13:13:09.629696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.629706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.629789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.629800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.629969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.629981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.630126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.630137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.630306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.630318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 
00:29:10.110 [2024-11-29 13:13:09.630466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.630477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.630630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.630641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.630724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.630735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.630797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.630808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.630880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.630892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 
00:29:10.110 [2024-11-29 13:13:09.631059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.631070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.631207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.631218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.631367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.631378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.631471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.631482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.631626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.631637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 
00:29:10.110 [2024-11-29 13:13:09.631772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.631783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.631875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.110 [2024-11-29 13:13:09.631885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.110 qpair failed and we were unable to recover it. 00:29:10.110 [2024-11-29 13:13:09.632027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.111 [2024-11-29 13:13:09.632039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.111 qpair failed and we were unable to recover it. 00:29:10.111 [2024-11-29 13:13:09.632185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.111 [2024-11-29 13:13:09.632196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.111 qpair failed and we were unable to recover it. 00:29:10.111 [2024-11-29 13:13:09.632284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.111 [2024-11-29 13:13:09.632295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.111 qpair failed and we were unable to recover it. 
00:29:10.111 [2024-11-29 13:13:09.632385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.111 [2024-11-29 13:13:09.632395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.111 qpair failed and we were unable to recover it. 
[identical connect() retry error messages for tqpair=0x7f8384000b90 (addr=10.0.0.2, port=4420) repeated over the interval 13:13:09.632–13:13:09.647; repeats omitted]
00:29:10.114 [2024-11-29 13:13:09.647681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.647692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.647769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.647779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.647927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.647937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.648078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.648089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.648154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.648164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 
00:29:10.114 [2024-11-29 13:13:09.648244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.648254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.648330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.648349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.648434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.648444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.648597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.648607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.648806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.648816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 
00:29:10.114 [2024-11-29 13:13:09.648894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.648904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.649053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.649064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.649143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.649153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.649305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.649316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.649470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.649480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 
00:29:10.114 [2024-11-29 13:13:09.649560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.649571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.649672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.649682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.649806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.649816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.649968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.649979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.650117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.650127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 
00:29:10.114 [2024-11-29 13:13:09.650205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.650215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.650283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.650293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.650507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.650518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.650717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.650727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.650800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.650814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 
00:29:10.114 [2024-11-29 13:13:09.650961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.650972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.651047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.114 [2024-11-29 13:13:09.651057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.114 qpair failed and we were unable to recover it. 00:29:10.114 [2024-11-29 13:13:09.651122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.651133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.651215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.651225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.651325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.651335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 
00:29:10.115 [2024-11-29 13:13:09.651503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.651513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.651592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.651603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.651753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.651763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.651929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.651940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.652024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.652034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 
00:29:10.115 [2024-11-29 13:13:09.652200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.652210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.652371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.652381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.652531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.652541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.652683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.652694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.652891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.652901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 
00:29:10.115 [2024-11-29 13:13:09.652991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.653003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.653152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.653163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.653305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.653315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.653517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.653527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.653619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.653630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 
00:29:10.115 [2024-11-29 13:13:09.653846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.653857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.653992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.654003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.654080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.654090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.654245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.654255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 00:29:10.115 [2024-11-29 13:13:09.654466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.115 [2024-11-29 13:13:09.654476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.115 qpair failed and we were unable to recover it. 
00:29:10.115 [2024-11-29 13:13:09.654875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.115 [2024-11-29 13:13:09.654905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.115 qpair failed and we were unable to recover it.
00:29:10.116 [2024-11-29 13:13:09.661436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.116 [2024-11-29 13:13:09.661446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.116 qpair failed and we were unable to recover it. 00:29:10.116 [2024-11-29 13:13:09.661602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.116 [2024-11-29 13:13:09.661612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.116 qpair failed and we were unable to recover it. 00:29:10.116 [2024-11-29 13:13:09.661710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.116 [2024-11-29 13:13:09.661720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.116 qpair failed and we were unable to recover it. 00:29:10.116 [2024-11-29 13:13:09.661797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.116 [2024-11-29 13:13:09.661807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.116 qpair failed and we were unable to recover it. 00:29:10.116 [2024-11-29 13:13:09.661878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.116 [2024-11-29 13:13:09.661889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.116 qpair failed and we were unable to recover it. 
00:29:10.116 [2024-11-29 13:13:09.661958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.116 [2024-11-29 13:13:09.661969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.116 qpair failed and we were unable to recover it. 00:29:10.116 [2024-11-29 13:13:09.662074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.116 [2024-11-29 13:13:09.662084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.116 qpair failed and we were unable to recover it. 00:29:10.116 [2024-11-29 13:13:09.662214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.116 [2024-11-29 13:13:09.662224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.116 qpair failed and we were unable to recover it. 00:29:10.116 [2024-11-29 13:13:09.662318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.116 [2024-11-29 13:13:09.662328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.116 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.662474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.662484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 
00:29:10.117 [2024-11-29 13:13:09.662579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.662589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.662665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.662676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.662926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.662937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.663094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.663105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.663311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.663321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 
00:29:10.117 [2024-11-29 13:13:09.663407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.663418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.663493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.663503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.663639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.663650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.663731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.663742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.663895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.663905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 
00:29:10.117 [2024-11-29 13:13:09.663984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.663995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.664144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.664154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.664226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.664236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.664318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.664328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.664485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.664496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 
00:29:10.117 [2024-11-29 13:13:09.664651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.664661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.664860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.664870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.664953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.664964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.665052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.665062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.665134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.665146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 
00:29:10.117 [2024-11-29 13:13:09.665215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.665225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.665290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.665300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.665377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.665387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.665605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.665615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.665763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.665773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 
00:29:10.117 [2024-11-29 13:13:09.665840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.665850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.665921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.665933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.666037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.666146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.666287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 
00:29:10.117 [2024-11-29 13:13:09.666367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.666456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.666526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.666622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.666721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 
00:29:10.117 [2024-11-29 13:13:09.666798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.117 qpair failed and we were unable to recover it. 00:29:10.117 [2024-11-29 13:13:09.666966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.117 [2024-11-29 13:13:09.666977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.667132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.667142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.667274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.667284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.667409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.667420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 
00:29:10.118 [2024-11-29 13:13:09.667586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.667597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.667737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.667747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.667884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.667894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.667978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.667989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.668205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.668216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 
00:29:10.118 [2024-11-29 13:13:09.668360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.668370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.668432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.668442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.668600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.668610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.668705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.668716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.668792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.668802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 
00:29:10.118 [2024-11-29 13:13:09.668955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.668966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.669050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.669060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.669285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.669295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.669527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.669537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.669687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.669697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 
00:29:10.118 [2024-11-29 13:13:09.669849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.669859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.670088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.670099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.670264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.670275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.670419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.670430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.670565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.670576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 
00:29:10.118 [2024-11-29 13:13:09.670652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.670662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.670743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.670754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.670887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.670898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.671120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.671131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.671212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.671222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 
00:29:10.118 [2024-11-29 13:13:09.671314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.671325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.671407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.671419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.671565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.671575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.118 [2024-11-29 13:13:09.671730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.118 [2024-11-29 13:13:09.671740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.118 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.671885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.671896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 
00:29:10.119 [2024-11-29 13:13:09.672027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.672038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.672126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.672136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.672228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.672238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.672313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.672322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.672407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.672418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 
00:29:10.119 [2024-11-29 13:13:09.672561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.672572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.672651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.672661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.672807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.672818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.672908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.672919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.673016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.673027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 
00:29:10.119 [2024-11-29 13:13:09.673094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.673104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.673180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.673191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.673339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.673349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.673496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.673506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.673574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.673585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 
00:29:10.119 [2024-11-29 13:13:09.673721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.673731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.673807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.673818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.673909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.673919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.673990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.674001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.674150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.674161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 
00:29:10.119 [2024-11-29 13:13:09.674313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.674324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.674407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.674417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.674560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.674571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.674635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.674645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.674723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.674734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 
00:29:10.119 [2024-11-29 13:13:09.674869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.674879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.675077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.675089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.675172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.675182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.675283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.675294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.675432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.675442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 
00:29:10.119 [2024-11-29 13:13:09.675523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.675534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.675604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.675615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.675750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.675760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.675839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.675850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.675925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.675935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 
00:29:10.119 [2024-11-29 13:13:09.676088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.676098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.119 qpair failed and we were unable to recover it. 00:29:10.119 [2024-11-29 13:13:09.676267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.119 [2024-11-29 13:13:09.676279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.676354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.676364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.676430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.676441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.676520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.676530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-11-29 13:13:09.676606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.676616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.676748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.676758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.676963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.676974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.677066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.677076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.677211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.677221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-11-29 13:13:09.677404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.677414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.677564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.677574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.677777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.677787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.677867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.677878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.678028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.678039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-11-29 13:13:09.678191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.678202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.678298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.678309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.678369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.678379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.678468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.678479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.678614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.678624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-11-29 13:13:09.678766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.678777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.678984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.678995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.679192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.679203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.679284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.679294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.679390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.679400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-11-29 13:13:09.679545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.679556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.679651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.679661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.679791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.679802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.679867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.679877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.679975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.679987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-11-29 13:13:09.680158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.680168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.680394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.680404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.680425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.120 [2024-11-29 13:13:09.680539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.680551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.680691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.680702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.680796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.680807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 
00:29:10.120 [2024-11-29 13:13:09.680897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.680909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.681108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.681119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.681284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.681296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.681457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.681468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.120 qpair failed and we were unable to recover it. 00:29:10.120 [2024-11-29 13:13:09.681538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.120 [2024-11-29 13:13:09.681549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 
00:29:10.121 [2024-11-29 13:13:09.681705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.681715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.681928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.681939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.682005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.682016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.682148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.682158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.682242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.682253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 
00:29:10.121 [2024-11-29 13:13:09.682405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.682416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.682513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.682523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.682657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.682667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.682734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.682745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.682811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.682821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 
00:29:10.121 [2024-11-29 13:13:09.682924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.682935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.683036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.683047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.683192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.683202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.683355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.683365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.683452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.683465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 
00:29:10.121 [2024-11-29 13:13:09.683563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.683573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.683705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.683715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.683851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.683862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.683992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.684004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 00:29:10.121 [2024-11-29 13:13:09.684227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.121 [2024-11-29 13:13:09.684239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.121 qpair failed and we were unable to recover it. 
00:29:10.122 [2024-11-29 13:13:09.686461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.122 [2024-11-29 13:13:09.686472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.122 qpair failed and we were unable to recover it.
00:29:10.122 [2024-11-29 13:13:09.686625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.122 [2024-11-29 13:13:09.686647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.122 qpair failed and we were unable to recover it.
00:29:10.122 [2024-11-29 13:13:09.686887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.122 [2024-11-29 13:13:09.686903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.122 qpair failed and we were unable to recover it.
00:29:10.122 [2024-11-29 13:13:09.687047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.122 [2024-11-29 13:13:09.687062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.122 qpair failed and we were unable to recover it.
00:29:10.122 [2024-11-29 13:13:09.687217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.122 [2024-11-29 13:13:09.687232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.122 qpair failed and we were unable to recover it.
00:29:10.124 [2024-11-29 13:13:09.700814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.700828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.700917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.700931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.701075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.701090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.701197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.701211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.701368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.701383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 
00:29:10.124 [2024-11-29 13:13:09.701468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.701482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.701646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.701673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.701820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.701834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.702056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.702067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.702265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.702276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 
00:29:10.124 [2024-11-29 13:13:09.702444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.702454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.702523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.702534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.702669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.702679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.702827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.702837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 00:29:10.124 [2024-11-29 13:13:09.702907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.702917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.124 qpair failed and we were unable to recover it. 
00:29:10.124 [2024-11-29 13:13:09.703012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.124 [2024-11-29 13:13:09.703022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.703245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.703255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.703311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.703322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.703398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.703409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.703487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.703500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 
00:29:10.125 [2024-11-29 13:13:09.703591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.703601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.703673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.703683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.703905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.703915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.704139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.704151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.704284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.704294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 
00:29:10.125 [2024-11-29 13:13:09.704442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.704453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.704602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.704612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.704688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.704698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.704836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.704846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.704993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.705004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 
00:29:10.125 [2024-11-29 13:13:09.705141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.705151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.705281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.705291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.705360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.705370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.705517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.705528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.705592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.705602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 
00:29:10.125 [2024-11-29 13:13:09.705733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.705744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.705878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.705888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.705955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.705966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.706109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.706119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.706293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.706303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 
00:29:10.125 [2024-11-29 13:13:09.706393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.706403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.706494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.706505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.706576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.706587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.706656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.706666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.706813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.706823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 
00:29:10.125 [2024-11-29 13:13:09.706901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.706911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.707000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.707018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.707229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.707244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.707384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.707399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.707489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.707503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 
00:29:10.125 [2024-11-29 13:13:09.707594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.125 [2024-11-29 13:13:09.707609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.125 qpair failed and we were unable to recover it. 00:29:10.125 [2024-11-29 13:13:09.707822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.707837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.707978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.707993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.708070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.708084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.708186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.708200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 
00:29:10.126 [2024-11-29 13:13:09.708343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.708357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.708454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.708468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.708618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.708632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.708728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.708742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.708899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.708919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 
00:29:10.126 [2024-11-29 13:13:09.709075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.709091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.709249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.709263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.709419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.709433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.709586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.709600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.709724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.709737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 
00:29:10.126 [2024-11-29 13:13:09.709893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.709908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.710062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.710077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.710163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.710177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.710267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.710282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.710366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.710380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 
00:29:10.126 [2024-11-29 13:13:09.710457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.710472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.710625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.710639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.710789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.710804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.710953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.710968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.711058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.711072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 
00:29:10.126 [2024-11-29 13:13:09.711249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.711263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.711347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.711361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.711579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.711593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.711743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.711757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.711911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.711925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 
00:29:10.126 [2024-11-29 13:13:09.712016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.712031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.712111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.712125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.712274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.712288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.712520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.712534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.126 [2024-11-29 13:13:09.712621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.712636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 
00:29:10.126 [2024-11-29 13:13:09.712791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.126 [2024-11-29 13:13:09.712805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.126 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.712964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.712980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.713161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.713175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.713336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.713350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.713603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.713617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 
00:29:10.127 [2024-11-29 13:13:09.713717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.713732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.713877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.713891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.714069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.714084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.714294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.714309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.714466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.714480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 
00:29:10.127 [2024-11-29 13:13:09.714695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.714709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.714789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.714803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.714888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.714902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.715026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.715041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.715128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.715145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 
00:29:10.127 [2024-11-29 13:13:09.715435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.715450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.715609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.715623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.715780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.715794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.716009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.716024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.716123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.716137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 
00:29:10.127 [2024-11-29 13:13:09.716290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.716304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.716404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.716418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.716572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.716586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.716730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.716744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.716897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.716910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 
00:29:10.127 [2024-11-29 13:13:09.717118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.717133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.717290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.717304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.717481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.717495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.717668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.717682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.717767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.717781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 
00:29:10.127 [2024-11-29 13:13:09.717945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.717965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.718057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.718071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.127 [2024-11-29 13:13:09.718227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.127 [2024-11-29 13:13:09.718241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.127 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.718382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.718396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.718550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.718564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 
00:29:10.128 [2024-11-29 13:13:09.718663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.718677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.718889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.718903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.718995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.719011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.719270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.719285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.719435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.719449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 
00:29:10.128 [2024-11-29 13:13:09.719548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.719562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.719719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.719736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.719812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.719825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.720039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.720051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.720255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.720267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 
00:29:10.128 [2024-11-29 13:13:09.720396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.720407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.720549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.720559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.720778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.720788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.720853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.720863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.720936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.720950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 
00:29:10.128 [2024-11-29 13:13:09.721089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.721101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.721177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.721188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.721336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.721347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.721544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.721556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.721654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.721666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 
00:29:10.128 [2024-11-29 13:13:09.721751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.721762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.721908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.721920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.722092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.722104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.722196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.722207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.722375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.722385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 
00:29:10.128 [2024-11-29 13:13:09.722613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.722623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.722737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.128 [2024-11-29 13:13:09.722764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.128 [2024-11-29 13:13:09.722771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.128 [2024-11-29 13:13:09.722777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.128 [2024-11-29 13:13:09.722784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.128 [2024-11-29 13:13:09.722772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.722782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.722932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.722943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 
00:29:10.128 [2024-11-29 13:13:09.723024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.723034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.723112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.723123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.128 qpair failed and we were unable to recover it. 00:29:10.128 [2024-11-29 13:13:09.723193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.128 [2024-11-29 13:13:09.723204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.723345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.723355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.723434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.723445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 
00:29:10.129 [2024-11-29 13:13:09.723574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.723585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.723671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.723681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.723830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.723841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.723996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.724007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.724087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.724098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 
00:29:10.129 [2024-11-29 13:13:09.724247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.724258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.724391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.724401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.724314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:10.129 [2024-11-29 13:13:09.724419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:10.129 [2024-11-29 13:13:09.724524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:10.129 [2024-11-29 13:13:09.724525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:10.129 [2024-11-29 13:13:09.724498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.724509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.724665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.724682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 
00:29:10.129 [2024-11-29 13:13:09.724816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.724826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.724979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.724990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.725167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.725179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.725265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.725276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 00:29:10.129 [2024-11-29 13:13:09.725415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.725426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it. 
00:29:10.129 [2024-11-29 13:13:09.725510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.129 [2024-11-29 13:13:09.725521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.129 qpair failed and we were unable to recover it.
[... the same message group (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 13:13:09.725510 through 13:13:09.740811, mostly for tqpair=0x7f8384000b90, with additional attempts for tqpair=0x7f838c000b90 and tqpair=0x1dfabe0; errno 111 is ECONNREFUSED, i.e. no listener was reachable on 10.0.0.2:4420 ...]
00:29:10.132 [2024-11-29 13:13:09.740877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.740888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 00:29:10.132 [2024-11-29 13:13:09.741025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.741040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 00:29:10.132 [2024-11-29 13:13:09.741175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.741186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 00:29:10.132 [2024-11-29 13:13:09.741278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.741289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 00:29:10.132 [2024-11-29 13:13:09.741379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.741391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 
00:29:10.132 [2024-11-29 13:13:09.741478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.741489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 00:29:10.132 [2024-11-29 13:13:09.741572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.741583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 00:29:10.132 [2024-11-29 13:13:09.741713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.741724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 00:29:10.132 [2024-11-29 13:13:09.741857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.741868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 00:29:10.132 [2024-11-29 13:13:09.741999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.132 [2024-11-29 13:13:09.742014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.132 qpair failed and we were unable to recover it. 
00:29:10.133 [2024-11-29 13:13:09.742157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.742169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.742235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.742246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.742336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.742348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.742486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.742497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.742585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.742596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 
00:29:10.133 [2024-11-29 13:13:09.742680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.742691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.742764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.742775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.742858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.742869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.742936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.742960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.743033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.743045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 
00:29:10.133 [2024-11-29 13:13:09.743268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.743279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.743358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.743369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.743540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.743552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.743633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.743644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.743718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.743729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 
00:29:10.133 [2024-11-29 13:13:09.743793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.743804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.743940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.743956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.744109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.744120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.744191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.744205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.744344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.744355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 
00:29:10.133 [2024-11-29 13:13:09.744420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.744431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.744508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.744519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.744585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.744596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.744727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.744738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.744804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.744815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 
00:29:10.133 [2024-11-29 13:13:09.744969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.744981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.745072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.745084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.745163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.745174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.745319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.745331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.745406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.745418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 
00:29:10.133 [2024-11-29 13:13:09.745500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.745511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.745657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.745669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.745750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.745761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.745894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.133 [2024-11-29 13:13:09.745905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.133 qpair failed and we were unable to recover it. 00:29:10.133 [2024-11-29 13:13:09.746132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.746144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 
00:29:10.134 [2024-11-29 13:13:09.746285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.746296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.746375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.746386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.746448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.746459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.746538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.746550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.746645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.746656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 
00:29:10.134 [2024-11-29 13:13:09.746750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.746761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.746992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.747004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.747083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.747094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.747245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.747256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.747326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.747337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 
00:29:10.134 [2024-11-29 13:13:09.747408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.747420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.747548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.747560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.747633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.747644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.747788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.747799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.747927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.747938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 
00:29:10.134 [2024-11-29 13:13:09.748148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.748160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.748313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.748325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.748397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.748408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.748536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.748547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.748627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.748639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 
00:29:10.134 [2024-11-29 13:13:09.748764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.748775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.748855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.748867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.749065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.749077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.749235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.749250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.749394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.749406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 
00:29:10.134 [2024-11-29 13:13:09.749492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.749504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.749714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.749725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.749806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.749818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.749882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.749893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.750094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.750107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 
00:29:10.134 [2024-11-29 13:13:09.750188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.750200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.750388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.750400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.750480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.750491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.750641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.750653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 00:29:10.134 [2024-11-29 13:13:09.750740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.134 [2024-11-29 13:13:09.750751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.134 qpair failed and we were unable to recover it. 
00:29:10.134-00:29:10.137 [... the same three-line failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry from [2024-11-29 13:13:09.750886] through [2024-11-29 13:13:09.764990] ...]
00:29:10.137 [2024-11-29 13:13:09.765192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.137 [2024-11-29 13:13:09.765203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.765279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.765290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.765364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.765376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.765453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.765464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.765664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.765676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-11-29 13:13:09.765810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.765821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.766020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.766032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.766182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.766194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.766256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.766267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.766400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.766412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-11-29 13:13:09.766473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.766484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.766689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.766701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.766763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.766774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.766839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.766850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.766940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.766955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-11-29 13:13:09.767030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.767041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.767095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.767106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.767192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.767203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.767344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.767356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.767488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.767499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-11-29 13:13:09.767646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.767658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.767734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.767745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.767869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.767880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.768089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.768101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.768250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.768262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-11-29 13:13:09.768347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.768358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.768493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.768506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.768744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.768756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.768839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.768850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.769020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.769033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-11-29 13:13:09.769116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.769132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.769293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.769304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.769482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.769493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.769655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.769666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.769744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.769755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-11-29 13:13:09.769894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.769905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.770072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.770083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.770235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.770247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.770350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.770365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.770442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.770453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 
00:29:10.138 [2024-11-29 13:13:09.770599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.770610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.770704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.138 [2024-11-29 13:13:09.770715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.138 qpair failed and we were unable to recover it. 00:29:10.138 [2024-11-29 13:13:09.770871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.770882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.771026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.771038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.771115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.771126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 
00:29:10.139 [2024-11-29 13:13:09.771275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.771287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.771435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.771447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.771614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.771625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.771693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.771704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.771858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.771869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 
00:29:10.139 [2024-11-29 13:13:09.772009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.772021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.772100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.772111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.772198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.772209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.772284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.772295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.772519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.772530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 
00:29:10.139 [2024-11-29 13:13:09.772700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.772711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.772798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.772810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.772886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.772897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.773064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.773076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.773218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.773230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 
00:29:10.139 [2024-11-29 13:13:09.773314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.773325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.773409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.773420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.773484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.773494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.773719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.773731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.773812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.773822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 
00:29:10.139 [2024-11-29 13:13:09.773965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.773978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.774163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.774175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.774310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.774322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.774402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.774413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.774570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.774581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 
00:29:10.139 [2024-11-29 13:13:09.774745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.774756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.774897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.774910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.775038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.775050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.775249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.775261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 00:29:10.139 [2024-11-29 13:13:09.775415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.139 [2024-11-29 13:13:09.775426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.139 qpair failed and we were unable to recover it. 
00:29:10.139 [2024-11-29 13:13:09.775582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.139 [2024-11-29 13:13:09.775593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.139 qpair failed and we were unable to recover it.
00:29:10.139 [2024-11-29 13:13:09.775739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.139 [2024-11-29 13:13:09.775750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.139 qpair failed and we were unable to recover it.
00:29:10.139 [2024-11-29 13:13:09.775899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.139 [2024-11-29 13:13:09.775910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.139 qpair failed and we were unable to recover it.
00:29:10.139 [2024-11-29 13:13:09.775971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.139 [2024-11-29 13:13:09.775985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.139 qpair failed and we were unable to recover it.
00:29:10.139 [2024-11-29 13:13:09.776152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.139 [2024-11-29 13:13:09.776163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.139 qpair failed and we were unable to recover it.
00:29:10.139 [2024-11-29 13:13:09.776296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.139 [2024-11-29 13:13:09.776307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.139 qpair failed and we were unable to recover it.
00:29:10.139 [2024-11-29 13:13:09.776539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.139 [2024-11-29 13:13:09.776551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.139 qpair failed and we were unable to recover it.
00:29:10.139 [2024-11-29 13:13:09.776630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.139 [2024-11-29 13:13:09.776642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.139 qpair failed and we were unable to recover it.
00:29:10.139 [2024-11-29 13:13:09.776748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.139 [2024-11-29 13:13:09.776760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.139 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.776904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.776916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.777055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.777067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.777212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.777223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.777314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.777324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.777401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.777412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.777560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.777571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.777660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.777671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.777747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.777758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.777838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.777849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.777928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.777940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.778032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.778043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.778177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.778189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.778335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.778346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.778413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.778424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.778560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.778571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.778664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.778675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.778907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.778918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.779011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.779023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.779090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.779101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.779193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.779204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.779333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.779344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.779424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.779435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.779597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.779609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.779744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.779756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.779974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.779986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.780065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.780076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.780204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.780215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.780350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.780360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.780566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.780578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.780711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.780723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.780943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.780960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.781055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.781067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.781148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.781159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.781308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.781320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.781399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.781413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.781478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.781490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.781565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.781576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.781840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.781852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.781941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.781958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.782036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.782047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.782121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.140 [2024-11-29 13:13:09.782133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.140 qpair failed and we were unable to recover it.
00:29:10.140 [2024-11-29 13:13:09.782290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.782301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.782371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.782382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.782444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.782455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.782523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.782534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.782671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.782681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.782816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.782826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.782891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.782901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.783113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.783125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.783274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.783285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.783360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.783371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.783440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.783451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.783540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.783550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.783684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.783695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.783843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.783854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.783930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.783941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.784024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.784035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.784172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.784183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.784315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.784325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.784402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.784413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.784617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.784628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.784706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.784716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.784804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.784814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.784971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.784983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.785053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.785063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.785197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.785208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.785339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.785350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.785413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.785424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.785561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.785572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.785639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.785650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.785782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.785792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.785940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.785955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.786177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.786189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.786251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.786261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.786407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.786420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.786482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.786494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.786652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.786665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.786807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.786819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.786971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.786983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.787181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.141 [2024-11-29 13:13:09.787193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.141 qpair failed and we were unable to recover it.
00:29:10.141 [2024-11-29 13:13:09.787269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.787280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.787422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.787433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.787582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.787593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.787756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.787767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.787853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.787864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.788068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.788079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.788211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.788222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.788311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.788322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.788398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.788409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.788554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.788565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.788638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.788649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.788728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.788739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.788808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.788819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.788911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.788921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.789013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.789025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.789164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.789175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.789252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.789263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.789411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.789421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.142 [2024-11-29 13:13:09.789513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.142 [2024-11-29 13:13:09.789524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.142 qpair failed and we were unable to recover it.
00:29:10.143 [2024-11-29 13:13:09.789655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.143 [2024-11-29 13:13:09.789666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.143 qpair failed and we were unable to recover it.
00:29:10.143 [2024-11-29 13:13:09.789729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.143 [2024-11-29 13:13:09.789739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.143 qpair failed and we were unable to recover it.
00:29:10.143 [2024-11-29 13:13:09.789844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.143 [2024-11-29 13:13:09.789880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.143 qpair failed and we were unable to recover it.
00:29:10.143 [2024-11-29 13:13:09.789989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.143 [2024-11-29 13:13:09.790005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.143 qpair failed and we were unable to recover it.
00:29:10.143 [2024-11-29 13:13:09.790086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.143 [2024-11-29 13:13:09.790101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.143 qpair failed and we were unable to recover it.
00:29:10.143 [2024-11-29 13:13:09.790203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.143 [2024-11-29 13:13:09.790217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.143 qpair failed and we were unable to recover it.
00:29:10.143 [2024-11-29 13:13:09.790315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.143 [2024-11-29 13:13:09.790330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.143 qpair failed and we were unable to recover it.
00:29:10.143 [2024-11-29 13:13:09.790420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.790434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.790507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.790521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.790620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.790634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.790791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.790805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.790876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.790890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 
00:29:10.143 [2024-11-29 13:13:09.791101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.791118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.791208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.791223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.791363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.791378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.791538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.791552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.791697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.791712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 
00:29:10.143 [2024-11-29 13:13:09.791893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.791908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.792058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.792073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.792157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.792172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.792326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.792341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.792424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.792438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 
00:29:10.143 [2024-11-29 13:13:09.792609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.792623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.792708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.792723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.792875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.792890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.792991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.793007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.143 qpair failed and we were unable to recover it. 00:29:10.143 [2024-11-29 13:13:09.793162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.143 [2024-11-29 13:13:09.793176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 
00:29:10.144 [2024-11-29 13:13:09.793280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.793295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.793376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.793391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.793477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.793495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.793573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.793588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.793666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.793681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 
00:29:10.144 [2024-11-29 13:13:09.793837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.793852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.793944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.793976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.794069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.794084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.794227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.794242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.794312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.794326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 
00:29:10.144 [2024-11-29 13:13:09.794412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.794427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.794508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.794523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.794593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.794607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.794754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.794768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.794855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.794870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 
00:29:10.144 [2024-11-29 13:13:09.794990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.795005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.795101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.795116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.795202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.795217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.795299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.795314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.795452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.795467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 
00:29:10.144 [2024-11-29 13:13:09.795598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.795613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.795755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.795770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.795910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.795925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.796018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.796034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.796138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.796153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 
00:29:10.144 [2024-11-29 13:13:09.796272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.796287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.144 [2024-11-29 13:13:09.796428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.144 [2024-11-29 13:13:09.796443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.144 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.796524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.796538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.796676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.796691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.796842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.796861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 
00:29:10.145 [2024-11-29 13:13:09.796951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.796966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.797105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.797120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.797260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.797275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.797448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.797463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.797668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.797683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 
00:29:10.145 [2024-11-29 13:13:09.797859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.797873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.797959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.797974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.798069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.798084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.798169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.798184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.798274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.798289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 
00:29:10.145 [2024-11-29 13:13:09.798357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.798372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.798458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.798473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.798612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.798627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.798729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.798744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.798843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.798857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 
00:29:10.145 [2024-11-29 13:13:09.799012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.799027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.799147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.799161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.799236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.799251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.799389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.799404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.799541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.799555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 
00:29:10.145 [2024-11-29 13:13:09.799645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.799659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.799792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.799803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.799880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.799891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.145 qpair failed and we were unable to recover it. 00:29:10.145 [2024-11-29 13:13:09.800076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.145 [2024-11-29 13:13:09.800087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.146 qpair failed and we were unable to recover it. 00:29:10.146 [2024-11-29 13:13:09.800158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.146 [2024-11-29 13:13:09.800168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.146 qpair failed and we were unable to recover it. 
00:29:10.146 [2024-11-29 13:13:09.800242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.146 [2024-11-29 13:13:09.800253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.146 qpair failed and we were unable to recover it.
00:29:10.146 [... the same connect()/sock-connection-error/"qpair failed and we were unable to recover it." triplet repeats for every retry between 13:13:09.800242 and 13:13:09.813765, always with errno = 111 and addr=10.0.0.2, port=4420; the reported tqpair alternates between 0x7f8384000b90 and 0x1dfabe0 ...]
00:29:10.150 [2024-11-29 13:13:09.813755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.150 [2024-11-29 13:13:09.813765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.150 qpair failed and we were unable to recover it.
00:29:10.150 [2024-11-29 13:13:09.813911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.813921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 00:29:10.150 [2024-11-29 13:13:09.814068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.814079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 00:29:10.150 [2024-11-29 13:13:09.814163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.814175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 00:29:10.150 [2024-11-29 13:13:09.814310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.814321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 00:29:10.150 [2024-11-29 13:13:09.814457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.814468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 
00:29:10.150 [2024-11-29 13:13:09.814544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.814555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 00:29:10.150 [2024-11-29 13:13:09.814630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.814641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 00:29:10.150 [2024-11-29 13:13:09.814721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.814732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 00:29:10.150 [2024-11-29 13:13:09.814871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.814882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 00:29:10.150 [2024-11-29 13:13:09.815034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.815047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 
00:29:10.150 [2024-11-29 13:13:09.815200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.150 [2024-11-29 13:13:09.815211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.150 qpair failed and we were unable to recover it. 00:29:10.150 [2024-11-29 13:13:09.815273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.815284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.815419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.815430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.815512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.815522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.815592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.815603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 
00:29:10.151 [2024-11-29 13:13:09.815757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.815768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.815953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.815964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.816058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.816069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.816148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.816160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.816238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.816248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 
00:29:10.151 [2024-11-29 13:13:09.816381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.816394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.816469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.816479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.816565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.816576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.816662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.816672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.816745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.816756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 
00:29:10.151 [2024-11-29 13:13:09.816836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.816847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.817049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.817060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.817197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.817207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.817265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.817276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.817420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.817431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 
00:29:10.151 [2024-11-29 13:13:09.817483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.817494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.817588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.817599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.817664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.817674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.817824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.817835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.817905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.817915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 
00:29:10.151 [2024-11-29 13:13:09.818060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.818072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.151 [2024-11-29 13:13:09.818209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.818220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.151 qpair failed and we were unable to recover it. 00:29:10.151 [2024-11-29 13:13:09.818363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.151 [2024-11-29 13:13:09.818375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.818465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.818476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 
00:29:10.152 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:10.152 [2024-11-29 13:13:09.818560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.818570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.818647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.818657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.818745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.818756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.818826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.818837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 
00:29:10.152 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.152 [2024-11-29 13:13:09.818967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.818979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.819109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.819120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.152 [2024-11-29 13:13:09.819257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.819270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.819337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.819348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 
00:29:10.152 [2024-11-29 13:13:09.819422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.819433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.152 [2024-11-29 13:13:09.819499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.819510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.819574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.819585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.819664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.819675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.819747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.819758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 
00:29:10.152 [2024-11-29 13:13:09.819916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.819926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.820009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.820020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.820154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.820165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.820228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.820239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.820339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.820350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 
00:29:10.152 [2024-11-29 13:13:09.820422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.820435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.820508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.820520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.820593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.820605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.820751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.820762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 00:29:10.152 [2024-11-29 13:13:09.820855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.820866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.152 qpair failed and we were unable to recover it. 
00:29:10.152 [2024-11-29 13:13:09.821024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.152 [2024-11-29 13:13:09.821036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 00:29:10.153 [2024-11-29 13:13:09.821178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.821190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 00:29:10.153 [2024-11-29 13:13:09.821252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.821263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 00:29:10.153 [2024-11-29 13:13:09.821422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.821433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 00:29:10.153 [2024-11-29 13:13:09.821507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.821518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 
00:29:10.153 [2024-11-29 13:13:09.821589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.821600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 00:29:10.153 [2024-11-29 13:13:09.821751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.821761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 00:29:10.153 [2024-11-29 13:13:09.821826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.821837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 00:29:10.153 [2024-11-29 13:13:09.821897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.821918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 00:29:10.153 [2024-11-29 13:13:09.821982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.821996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it. 
00:29:10.153 [2024-11-29 13:13:09.822149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.153 [2024-11-29 13:13:09.822160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.153 qpair failed and we were unable to recover it.
00:29:10.156 [last message repeated with identical errno = 111 and tqpair=0x7f8384000b90 (addr=10.0.0.2, port=4420) from 13:13:09.822296 through 13:13:09.834094]
00:29:10.157 [2024-11-29 13:13:09.834171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.157 [2024-11-29 13:13:09.834182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.157 qpair failed and we were unable to recover it. 00:29:10.157 [2024-11-29 13:13:09.834248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.157 [2024-11-29 13:13:09.834258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.157 qpair failed and we were unable to recover it. 00:29:10.157 [2024-11-29 13:13:09.834410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.834420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.834491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.834502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.834563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.834574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 
00:29:10.158 [2024-11-29 13:13:09.834741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.834752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.834815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.834826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.834968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.834981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.835117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.835187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 
00:29:10.158 [2024-11-29 13:13:09.835267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.835350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.835426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.835501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.835588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 
00:29:10.158 [2024-11-29 13:13:09.835665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.835756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.835828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.835839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.835990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.836080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 
00:29:10.158 [2024-11-29 13:13:09.836242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.836358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.836459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.836537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.836626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 
00:29:10.158 [2024-11-29 13:13:09.836715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.836797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.836880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.836891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.158 [2024-11-29 13:13:09.837028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.158 [2024-11-29 13:13:09.837039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.158 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.837185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.837196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 
00:29:10.159 [2024-11-29 13:13:09.837271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.837282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.837372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.837385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.837463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.837474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.837554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.837566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.837655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.837666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 
00:29:10.159 [2024-11-29 13:13:09.837748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.837760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.837836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.837847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.837917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.837929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 
00:29:10.159 [2024-11-29 13:13:09.838167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 
00:29:10.159 [2024-11-29 13:13:09.838578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.838893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.838904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 
00:29:10.159 [2024-11-29 13:13:09.839042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.839054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.839122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.839133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.839198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.839210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.839278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.839289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 00:29:10.159 [2024-11-29 13:13:09.839370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.159 [2024-11-29 13:13:09.839382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.159 qpair failed and we were unable to recover it. 
00:29:10.159 [2024-11-29 13:13:09.839454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.839465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.839529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.839540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.839623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.839634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.839699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.839709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.839772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.839782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 
00:29:10.160 [2024-11-29 13:13:09.839850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.839861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.839935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.839945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 
00:29:10.160 [2024-11-29 13:13:09.840261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 
00:29:10.160 [2024-11-29 13:13:09.840664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.840906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.840917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.841011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.841023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 
00:29:10.160 [2024-11-29 13:13:09.841087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.841098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.841228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.841239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.841312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.841323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.841469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.841480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.841549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.841561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 
00:29:10.160 [2024-11-29 13:13:09.841635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.841646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.160 [2024-11-29 13:13:09.841716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.160 [2024-11-29 13:13:09.841727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.160 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.841886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.841898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.841971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.841983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.842046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 
00:29:10.161 [2024-11-29 13:13:09.842125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.842208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.842288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.842366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.842455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 
00:29:10.161 [2024-11-29 13:13:09.842547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.842691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.842768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.842857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.842931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.842941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 
00:29:10.161 [2024-11-29 13:13:09.843027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.843039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.843105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.843115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.843250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.843261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.843343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.843355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.843487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.843498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 
00:29:10.161 [2024-11-29 13:13:09.843564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.843575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.843670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.843681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.843748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.843759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.161 qpair failed and we were unable to recover it. 00:29:10.161 [2024-11-29 13:13:09.843820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.161 [2024-11-29 13:13:09.843831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.843909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.843921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 
00:29:10.162 [2024-11-29 13:13:09.844004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.844084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.844250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.844350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.844434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 
00:29:10.162 [2024-11-29 13:13:09.844510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.844658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.844734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.844822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.844980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.844992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 
00:29:10.162 [2024-11-29 13:13:09.845064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.845163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.845241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.845339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.845491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 
00:29:10.162 [2024-11-29 13:13:09.845573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.845647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.845724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.845866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.845958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.845969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 
00:29:10.162 [2024-11-29 13:13:09.846041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.846052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.846129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.846140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.846217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.846228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.846295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.846307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 00:29:10.162 [2024-11-29 13:13:09.846389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.162 [2024-11-29 13:13:09.846400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.162 qpair failed and we were unable to recover it. 
00:29:10.162 [2024-11-29 13:13:09.846471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.846483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.846551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.846563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.846645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.846656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.846721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.846733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.846800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.846811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 
00:29:10.163 [2024-11-29 13:13:09.846946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.846960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.847030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.847118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.847212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.847357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 
00:29:10.163 [2024-11-29 13:13:09.847439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.847520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.847600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.847686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.847781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 
00:29:10.163 [2024-11-29 13:13:09.847859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.847937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.847953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.848030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.848109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.848185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 
00:29:10.163 [2024-11-29 13:13:09.848273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.848350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.848445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.848538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.848706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 
00:29:10.163 [2024-11-29 13:13:09.848791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.163 qpair failed and we were unable to recover it. 00:29:10.163 [2024-11-29 13:13:09.848872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.163 [2024-11-29 13:13:09.848883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.848961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.848973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.849050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.849149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 
00:29:10.164 [2024-11-29 13:13:09.849261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.849339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.849420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.849497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.849578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 
00:29:10.164 [2024-11-29 13:13:09.849740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.849830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.849914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.849926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.850068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.850152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 
00:29:10.164 [2024-11-29 13:13:09.850237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.850323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.850427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.850513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.850603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 
00:29:10.164 [2024-11-29 13:13:09.850753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.850854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.850931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.850942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.851041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.851053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.851130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.851141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 
00:29:10.164 [2024-11-29 13:13:09.851225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.851236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.851311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.851321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.851391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.164 [2024-11-29 13:13:09.851401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.164 qpair failed and we were unable to recover it. 00:29:10.164 [2024-11-29 13:13:09.851472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.851483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.851553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.851565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 
00:29:10.165 [2024-11-29 13:13:09.851630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.851641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.851715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.851726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.851800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.851812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.851886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.851897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.851975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.851987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 
00:29:10.165 [2024-11-29 13:13:09.852060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.852134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.852222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.852305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.852378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 
00:29:10.165 [2024-11-29 13:13:09.852451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.852591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.852663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.852761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.852953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.852965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 
00:29:10.165 [2024-11-29 13:13:09.853034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.853046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.853112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.853123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.853204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.853215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.853355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.853366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.853508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.853518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 
00:29:10.165 [2024-11-29 13:13:09.853589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.853600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.853677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.853687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.853757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.853767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.853902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.853914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.165 qpair failed and we were unable to recover it. 00:29:10.165 [2024-11-29 13:13:09.853993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.165 [2024-11-29 13:13:09.854023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 
00:29:10.166 [2024-11-29 13:13:09.854099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.854109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.854191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.854202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.854284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.854294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.854358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.854368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.854447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.854458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 
00:29:10.166 [2024-11-29 13:13:09.854530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.854540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.854671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.854682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.854745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.854755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.854955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.854980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.855160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.855176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 
00:29:10.166 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.166 [2024-11-29 13:13:09.855255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.855271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.855362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.855377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.855529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.855545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:10.166 [2024-11-29 13:13:09.855639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.855654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 
00:29:10.166 [2024-11-29 13:13:09.855746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.855760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.855858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.855873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.855968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.855984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.166 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.856061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.856073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.856152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.856163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 
00:29:10.166 [2024-11-29 13:13:09.856232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.856244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.166 [2024-11-29 13:13:09.856330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.856342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.856406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.856417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.856559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.856570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 00:29:10.166 [2024-11-29 13:13:09.856647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.166 [2024-11-29 13:13:09.856657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.166 qpair failed and we were unable to recover it. 
00:29:10.166 [2024-11-29 13:13:09.856735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.856746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.856810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.856821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.856957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.856969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.857039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.857180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 
00:29:10.167 [2024-11-29 13:13:09.857258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.857352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.857448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.857523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.857613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 
00:29:10.167 [2024-11-29 13:13:09.857694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.857772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.857849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.857953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.857965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.858047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.858058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 
00:29:10.167 [2024-11-29 13:13:09.858121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.858132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.858212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.858224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.858290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.858301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.858369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.858380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.858463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.858473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 
00:29:10.167 [2024-11-29 13:13:09.858625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.858637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.167 [2024-11-29 13:13:09.858719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.167 [2024-11-29 13:13:09.858730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.167 qpair failed and we were unable to recover it. 00:29:10.168 [2024-11-29 13:13:09.858864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.168 [2024-11-29 13:13:09.858877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.168 qpair failed and we were unable to recover it. 00:29:10.168 [2024-11-29 13:13:09.858945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.168 [2024-11-29 13:13:09.858962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.168 qpair failed and we were unable to recover it. 00:29:10.168 [2024-11-29 13:13:09.859043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.168 [2024-11-29 13:13:09.859054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.168 qpair failed and we were unable to recover it. 
00:29:10.168 [2024-11-29 13:13:09.859130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.168 [2024-11-29 13:13:09.859141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.168 qpair failed and we were unable to recover it. 00:29:10.168 [2024-11-29 13:13:09.859203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.168 [2024-11-29 13:13:09.859213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.168 qpair failed and we were unable to recover it. 00:29:10.168 [2024-11-29 13:13:09.859289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.168 [2024-11-29 13:13:09.859299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.168 qpair failed and we were unable to recover it. 00:29:10.168 [2024-11-29 13:13:09.859383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.168 [2024-11-29 13:13:09.859393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.168 qpair failed and we were unable to recover it. 00:29:10.168 [2024-11-29 13:13:09.859526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.168 [2024-11-29 13:13:09.859537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.168 qpair failed and we were unable to recover it. 
00:29:10.168 [2024-11-29 13:13:09.859597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.168 [2024-11-29 13:13:09.859607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.168 qpair failed and we were unable to recover it.
00:29:10.168 … [the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats back-to-back for every reconnect attempt from 13:13:09.859597 through 13:13:09.869988]
00:29:10.172 [2024-11-29 13:13:09.870051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.172 [2024-11-29 13:13:09.870062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.172 qpair failed and we were unable to recover it. 00:29:10.172 [2024-11-29 13:13:09.870124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.172 [2024-11-29 13:13:09.870135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.172 qpair failed and we were unable to recover it. 00:29:10.172 [2024-11-29 13:13:09.870204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.172 [2024-11-29 13:13:09.870215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.172 qpair failed and we were unable to recover it. 00:29:10.172 [2024-11-29 13:13:09.870306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.172 [2024-11-29 13:13:09.870317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.172 qpair failed and we were unable to recover it. 00:29:10.172 [2024-11-29 13:13:09.870395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.172 [2024-11-29 13:13:09.870406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.172 qpair failed and we were unable to recover it. 
00:29:10.172 [2024-11-29 13:13:09.870471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.172 [2024-11-29 13:13:09.870482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.172 qpair failed and we were unable to recover it. 00:29:10.172 [2024-11-29 13:13:09.870544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.172 [2024-11-29 13:13:09.870555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.172 qpair failed and we were unable to recover it. 00:29:10.172 [2024-11-29 13:13:09.870640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.172 [2024-11-29 13:13:09.870651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.172 qpair failed and we were unable to recover it. 00:29:10.172 [2024-11-29 13:13:09.870717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.172 [2024-11-29 13:13:09.870727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.172 qpair failed and we were unable to recover it. 00:29:10.172 [2024-11-29 13:13:09.870806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.870817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 
00:29:10.173 [2024-11-29 13:13:09.870972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.870983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 
00:29:10.173 [2024-11-29 13:13:09.871366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 
00:29:10.173 [2024-11-29 13:13:09.871763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.871954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.871966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.872031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.872125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 
00:29:10.173 [2024-11-29 13:13:09.872265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.872408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.872483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.872562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.872655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 
00:29:10.173 [2024-11-29 13:13:09.872732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.872811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.872968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.872979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.873045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.873056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.173 [2024-11-29 13:13:09.873125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.873135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 
00:29:10.173 [2024-11-29 13:13:09.873284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.173 [2024-11-29 13:13:09.873295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.173 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.873388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.873398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.873474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.873487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.873568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.873578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.873644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.873654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 
00:29:10.174 [2024-11-29 13:13:09.873729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.873740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.873878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.873889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.873957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.873968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 
00:29:10.174 [2024-11-29 13:13:09.874190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 
00:29:10.174 [2024-11-29 13:13:09.874680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.874982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.874993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 
00:29:10.174 [2024-11-29 13:13:09.875140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.875150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.875214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.875225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.174 qpair failed and we were unable to recover it. 00:29:10.174 [2024-11-29 13:13:09.875367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.174 [2024-11-29 13:13:09.875377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.875449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.875460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.875539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.875550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 
00:29:10.175 [2024-11-29 13:13:09.875627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.875638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.875721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.875732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.875879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.875890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.875976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.875987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.876084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.876095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 
00:29:10.175 [2024-11-29 13:13:09.876228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.876238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.876310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.876321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.876459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.876469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.876553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.876564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.876695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.876706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 
00:29:10.175 [2024-11-29 13:13:09.876787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.876798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.876874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.876885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.877031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.877042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.877119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.877129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 00:29:10.175 [2024-11-29 13:13:09.877192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.175 [2024-11-29 13:13:09.877203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.175 qpair failed and we were unable to recover it. 
00:29:10.175 [2024-11-29 13:13:09.877284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.175 [2024-11-29 13:13:09.877301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.175 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" sequences for tqpair=0x7f8384000b90, timestamps 13:13:09.877364 through 13:13:09.884178, omitted ...]
00:29:10.178 [2024-11-29 13:13:09.884283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.178 [2024-11-29 13:13:09.884319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8380000b90 with addr=10.0.0.2, port=4420
00:29:10.178 qpair failed and we were unable to recover it.
00:29:10.178 [2024-11-29 13:13:09.884423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.178 [2024-11-29 13:13:09.884446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.178 qpair failed and we were unable to recover it.
[... four more identical sequences for tqpair=0x1dfabe0, timestamps 13:13:09.884645 through 13:13:09.884978, omitted ...]
[... identical sequences for tqpair=0x7f8384000b90, timestamps 13:13:09.885059 through 13:13:09.888797, omitted ...]
00:29:10.179 [2024-11-29 13:13:09.888871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.179 [2024-11-29 13:13:09.888881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.179 qpair failed and we were unable to recover it.
00:29:10.179 Malloc0 00:29:10.180 [2024-11-29 13:13:09.888958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.180 [2024-11-29 13:13:09.888969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.180 qpair failed and we were unable to recover it. 00:29:10.180 [2024-11-29 13:13:09.889041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.180 [2024-11-29 13:13:09.889051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.180 qpair failed and we were unable to recover it. 00:29:10.180 [2024-11-29 13:13:09.889185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.180 [2024-11-29 13:13:09.889195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.180 qpair failed and we were unable to recover it. 00:29:10.180 [2024-11-29 13:13:09.889262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.180 [2024-11-29 13:13:09.889272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.180 qpair failed and we were unable to recover it. 00:29:10.180 [2024-11-29 13:13:09.889347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.180 [2024-11-29 13:13:09.889357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.180 qpair failed and we were unable to recover it. 
00:29:10.180 [2024-11-29 13:13:09.889516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.180 [2024-11-29 13:13:09.889526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.180 qpair failed and we were unable to recover it. 00:29:10.180 [2024-11-29 13:13:09.889597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.180 [2024-11-29 13:13:09.889607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.180 qpair failed and we were unable to recover it. 00:29:10.180 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.180 [2024-11-29 13:13:09.889673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.180 [2024-11-29 13:13:09.889685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.180 qpair failed and we were unable to recover it. 00:29:10.180 [2024-11-29 13:13:09.889771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.180 [2024-11-29 13:13:09.889781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.180 qpair failed and we were unable to recover it. 00:29:10.180 [2024-11-29 13:13:09.889929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.889940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.445 qpair failed and we were unable to recover it. 
00:29:10.445 [2024-11-29 13:13:09.890009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.890020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.445 qpair failed and we were unable to recover it. 00:29:10.445 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:10.445 [2024-11-29 13:13:09.890099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.890109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.445 qpair failed and we were unable to recover it. 00:29:10.445 [2024-11-29 13:13:09.890187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.890197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.445 qpair failed and we were unable to recover it. 00:29:10.445 [2024-11-29 13:13:09.890284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.890294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.445 qpair failed and we were unable to recover it. 00:29:10.445 [2024-11-29 13:13:09.890357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.890368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.445 qpair failed and we were unable to recover it. 
00:29:10.445 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.445 [2024-11-29 13:13:09.890449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.890460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.445 qpair failed and we were unable to recover it. 00:29:10.445 [2024-11-29 13:13:09.890521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.890531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.445 qpair failed and we were unable to recover it. 00:29:10.445 [2024-11-29 13:13:09.890614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.890625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.445 qpair failed and we were unable to recover it. 00:29:10.445 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.445 [2024-11-29 13:13:09.890693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.445 [2024-11-29 13:13:09.890705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 
00:29:10.446 [2024-11-29 13:13:09.890798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.890825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.890910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.890926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.891030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.891045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.891130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.891145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.891220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.891234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 
00:29:10.446 [2024-11-29 13:13:09.891330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.891344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.891419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.891434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.891509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.891523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.891604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.891618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.891786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.891800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 
00:29:10.446 [2024-11-29 13:13:09.891887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.891901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.892046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.892136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.892231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.892336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 
00:29:10.446 [2024-11-29 13:13:09.892424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.892524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.892643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.892742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.892835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 
00:29:10.446 [2024-11-29 13:13:09.892929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.892942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.893024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.893039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.893114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.893128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.893219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.893232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.893316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.893331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 
00:29:10.446 [2024-11-29 13:13:09.893405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.893420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.893523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.893537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.893681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.893696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.893869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.893883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.894075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.894089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 
00:29:10.446 [2024-11-29 13:13:09.894234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.894249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.894466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.894481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.894575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.894589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.894748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.894763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.894848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.894862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 
00:29:10.446 [2024-11-29 13:13:09.894940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.894975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.895069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.446 [2024-11-29 13:13:09.895084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.446 qpair failed and we were unable to recover it. 00:29:10.446 [2024-11-29 13:13:09.895243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.895259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.895428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.895442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.895527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.895541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 
00:29:10.447 [2024-11-29 13:13:09.895704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.895732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.895825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.895841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.895922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.895937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.896101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.896116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.896199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.896214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 
00:29:10.447 [2024-11-29 13:13:09.896291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.896305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.896446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.896460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.896526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.447 [2024-11-29 13:13:09.896568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.896582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.896760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.896773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.896952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.896967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 
00:29:10.447 [2024-11-29 13:13:09.897067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.897081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.897278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.897292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.897435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.897449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.897602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.897620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.897701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.897715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 
00:29:10.447 [2024-11-29 13:13:09.897788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.897802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.897893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.897907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.898066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.898081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.898300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.898314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.898398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.898412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 
00:29:10.447 [2024-11-29 13:13:09.898554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.898568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.898658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.898672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.898813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.898827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.898920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.898934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.899018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.899035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 
00:29:10.447 [2024-11-29 13:13:09.899153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.899168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.899236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.899250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.899328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.899342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.899421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.899435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.899516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.899530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 
00:29:10.447 [2024-11-29 13:13:09.899674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.899688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.899791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.899805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.899967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.899983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.900069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.900083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 00:29:10.447 [2024-11-29 13:13:09.900239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.447 [2024-11-29 13:13:09.900254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.447 qpair failed and we were unable to recover it. 
00:29:10.447 [2024-11-29 13:13:09.900393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.900407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.900551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.900565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.900644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.900658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f838c000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.900747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.900759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.900826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.900836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 
00:29:10.448 [2024-11-29 13:13:09.900975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.900988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.901054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.901132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.901208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.901305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 
00:29:10.448 [2024-11-29 13:13:09.901378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.901469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.901557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.901628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.901707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 
00:29:10.448 [2024-11-29 13:13:09.901789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.901869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.901879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.902014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.902024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.902106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.902116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.902185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.902195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 
00:29:10.448 [2024-11-29 13:13:09.902347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.902357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.902486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.902497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.902628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.902639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.902716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.902726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.902810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.902820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 
00:29:10.448 [2024-11-29 13:13:09.902899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.902909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.903001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.903085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.903257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.903348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 
00:29:10.448 [2024-11-29 13:13:09.903424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.903565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.903650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.903731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.903875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 
00:29:10.448 [2024-11-29 13:13:09.903969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.903979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.904117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.904127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.904258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.448 [2024-11-29 13:13:09.904268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.448 qpair failed and we were unable to recover it. 00:29:10.448 [2024-11-29 13:13:09.904335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.904345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.904487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.904497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 
00:29:10.449 [2024-11-29 13:13:09.904638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.904649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.904725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.904735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.904802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.904812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.904898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.904908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.905056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.905068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 
00:29:10.449 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.449 [2024-11-29 13:13:09.905203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.905214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.905311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.905321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.905389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.905399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.905480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.449 [2024-11-29 13:13:09.905491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 
00:29:10.449 [2024-11-29 13:13:09.905556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.905566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.905765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.905776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.449 [2024-11-29 13:13:09.905853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.905864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.905951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.905963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.906027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 
00:29:10.449 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:10.449 [2024-11-29 13:13:09.906105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.906190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.906280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.906356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.906440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 
00:29:10.449 [2024-11-29 13:13:09.906525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.906604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.906748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.906840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.906988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.906999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 
00:29:10.449 [2024-11-29 13:13:09.907089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.907099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.907241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.907252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.907314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.907324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.907399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.907409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.907490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.907500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 
00:29:10.449 [2024-11-29 13:13:09.907696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.907706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.907777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.907793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.907882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.907896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.907981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.907996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.449 [2024-11-29 13:13:09.908070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.908084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 
00:29:10.449 [2024-11-29 13:13:09.908176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.449 [2024-11-29 13:13:09.908190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.449 qpair failed and we were unable to recover it. 00:29:10.450 [2024-11-29 13:13:09.908349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.450 [2024-11-29 13:13:09.908364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.450 qpair failed and we were unable to recover it. 00:29:10.450 [2024-11-29 13:13:09.908509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.450 [2024-11-29 13:13:09.908523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.450 qpair failed and we were unable to recover it. 00:29:10.450 [2024-11-29 13:13:09.908678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.450 [2024-11-29 13:13:09.908692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.450 qpair failed and we were unable to recover it. 00:29:10.450 [2024-11-29 13:13:09.908777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.450 [2024-11-29 13:13:09.908791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420 00:29:10.450 qpair failed and we were unable to recover it. 
00:29:10.450 [2024-11-29 13:13:09.911228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.450 [2024-11-29 13:13:09.911239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.450 qpair failed and we were unable to recover it.
00:29:10.451 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.451 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:10.451 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.451 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:10.451 [2024-11-29 13:13:09.914565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.914575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.914721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.914732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.914867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.914878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.914972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.914983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.915100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 
00:29:10.451 [2024-11-29 13:13:09.915172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.915255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.915322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.915413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.915493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 
00:29:10.451 [2024-11-29 13:13:09.915583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.915666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.915746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.915822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.451 [2024-11-29 13:13:09.915832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.451 qpair failed and we were unable to recover it. 00:29:10.451 [2024-11-29 13:13:09.915908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.915918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 
00:29:10.452 [2024-11-29 13:13:09.916019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.916102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.916184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.916259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.916337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 
00:29:10.452 [2024-11-29 13:13:09.916480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.916649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.916724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.916864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.916955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.916965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 
00:29:10.452 [2024-11-29 13:13:09.917040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.917125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.917208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.917360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.917509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 
00:29:10.452 [2024-11-29 13:13:09.917596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.917688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.917764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.917846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.917941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.917963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 
00:29:10.452 [2024-11-29 13:13:09.918039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.918128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.918210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.918284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.918356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 
00:29:10.452 [2024-11-29 13:13:09.918431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.918511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.918667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.918757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.918829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 
00:29:10.452 [2024-11-29 13:13:09.918907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.918917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.918990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.919001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.919070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.919082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.919223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.919233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 00:29:10.452 [2024-11-29 13:13:09.919311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.452 [2024-11-29 13:13:09.919321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420 00:29:10.452 qpair failed and we were unable to recover it. 
00:29:10.452 [2024-11-29 13:13:09.919390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.452 [2024-11-29 13:13:09.919400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.452 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.919467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.919476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.919537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.919547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.919611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.919621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.919687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.919697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.919767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.919777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.919853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.919863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.919944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.919959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.920970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.920981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.921063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.921222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.453 [2024-11-29 13:13:09.921315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.921399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.921472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.921551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.921650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:10.453 [2024-11-29 13:13:09.921732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.921809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.921972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.921983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.453 [2024-11-29 13:13:09.922065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.922075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.922140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.922150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.922213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.922223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.922300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.922311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8384000b90 with addr=10.0.0.2, port=4420
00:29:10.453 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.922383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.922400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.922489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.922504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.922575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.922592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.922692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.453 [2024-11-29 13:13:09.922706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.453 qpair failed and we were unable to recover it.
00:29:10.453 [2024-11-29 13:13:09.922785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.922799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.922943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.922963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.923036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.923050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.923132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.923152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.923244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.923258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.923332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.923347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.923444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.923459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.923533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.923547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.923625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.923639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.923808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.923822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.923980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.923995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.924072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.924086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.924166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.924180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.924269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.924282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.924358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.924371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.924457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.924471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.924545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.924559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.924634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.454 [2024-11-29 13:13:09.924648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfabe0 with addr=10.0.0.2, port=4420
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.924749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:10.454 [2024-11-29 13:13:09.927188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.454 [2024-11-29 13:13:09.927272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.454 [2024-11-29 13:13:09.927295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.454 [2024-11-29 13:13:09.927306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.454 [2024-11-29 13:13:09.927317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.454 [2024-11-29 13:13:09.927344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.454 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:10.454 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.454 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:10.454 [2024-11-29 13:13:09.937083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.454 [2024-11-29 13:13:09.937169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.454 [2024-11-29 13:13:09.937189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.454 [2024-11-29 13:13:09.937200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.454 [2024-11-29 13:13:09.937208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.454 [2024-11-29 13:13:09.937235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.454 13:13:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2152796
00:29:10.454 [2024-11-29 13:13:09.947094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.454 [2024-11-29 13:13:09.947160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.454 [2024-11-29 13:13:09.947175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.454 [2024-11-29 13:13:09.947182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.454 [2024-11-29 13:13:09.947188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.454 [2024-11-29 13:13:09.947204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.957090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.454 [2024-11-29 13:13:09.957157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.454 [2024-11-29 13:13:09.957173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.454 [2024-11-29 13:13:09.957180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.454 [2024-11-29 13:13:09.957187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.454 [2024-11-29 13:13:09.957202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.454 qpair failed and we were unable to recover it.
00:29:10.454 [2024-11-29 13:13:09.967038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.454 [2024-11-29 13:13:09.967128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.454 [2024-11-29 13:13:09.967146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.454 [2024-11-29 13:13:09.967153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.455 [2024-11-29 13:13:09.967160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.455 [2024-11-29 13:13:09.967175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.455 qpair failed and we were unable to recover it.
00:29:10.455 [2024-11-29 13:13:09.977094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.455 [2024-11-29 13:13:09.977155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.455 [2024-11-29 13:13:09.977170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.455 [2024-11-29 13:13:09.977177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.455 [2024-11-29 13:13:09.977183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.455 [2024-11-29 13:13:09.977201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.455 qpair failed and we were unable to recover it.
00:29:10.455 [2024-11-29 13:13:09.987052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.455 [2024-11-29 13:13:09.987110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.455 [2024-11-29 13:13:09.987125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.455 [2024-11-29 13:13:09.987132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.455 [2024-11-29 13:13:09.987138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.455 [2024-11-29 13:13:09.987152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.455 qpair failed and we were unable to recover it.
00:29:10.455 [2024-11-29 13:13:09.997156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.455 [2024-11-29 13:13:09.997213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.455 [2024-11-29 13:13:09.997229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.455 [2024-11-29 13:13:09.997236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.455 [2024-11-29 13:13:09.997242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.455 [2024-11-29 13:13:09.997256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.455 qpair failed and we were unable to recover it.
00:29:10.455 [2024-11-29 13:13:10.007240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.455 [2024-11-29 13:13:10.007321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.455 [2024-11-29 13:13:10.007339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.455 [2024-11-29 13:13:10.007346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.455 [2024-11-29 13:13:10.007353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.455 [2024-11-29 13:13:10.007370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.455 qpair failed and we were unable to recover it.
00:29:10.455 [2024-11-29 13:13:10.017315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.455 [2024-11-29 13:13:10.017372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.455 [2024-11-29 13:13:10.017389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.455 [2024-11-29 13:13:10.017396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.455 [2024-11-29 13:13:10.017403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.455 [2024-11-29 13:13:10.017420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.455 qpair failed and we were unable to recover it.
00:29:10.455 [2024-11-29 13:13:10.027191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.455 [2024-11-29 13:13:10.027283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.455 [2024-11-29 13:13:10.027299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.455 [2024-11-29 13:13:10.027307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.455 [2024-11-29 13:13:10.027313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.455 [2024-11-29 13:13:10.027329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.455 qpair failed and we were unable to recover it.
00:29:10.455 [2024-11-29 13:13:10.037219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.455 [2024-11-29 13:13:10.037281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.455 [2024-11-29 13:13:10.037296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.455 [2024-11-29 13:13:10.037303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.455 [2024-11-29 13:13:10.037309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.455 [2024-11-29 13:13:10.037325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.455 qpair failed and we were unable to recover it. 
00:29:10.455 [2024-11-29 13:13:10.047328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.455 [2024-11-29 13:13:10.047390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.455 [2024-11-29 13:13:10.047407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.455 [2024-11-29 13:13:10.047414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.455 [2024-11-29 13:13:10.047421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.455 [2024-11-29 13:13:10.047438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.455 qpair failed and we were unable to recover it. 
00:29:10.455 [2024-11-29 13:13:10.057343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.455 [2024-11-29 13:13:10.057401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.455 [2024-11-29 13:13:10.057417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.455 [2024-11-29 13:13:10.057425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.455 [2024-11-29 13:13:10.057431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.455 [2024-11-29 13:13:10.057447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.455 qpair failed and we were unable to recover it. 
00:29:10.455 [2024-11-29 13:13:10.067360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.455 [2024-11-29 13:13:10.067438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.455 [2024-11-29 13:13:10.067456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.455 [2024-11-29 13:13:10.067468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.455 [2024-11-29 13:13:10.067475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.455 [2024-11-29 13:13:10.067490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.455 qpair failed and we were unable to recover it. 
00:29:10.455 [2024-11-29 13:13:10.077337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.455 [2024-11-29 13:13:10.077398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.455 [2024-11-29 13:13:10.077414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.455 [2024-11-29 13:13:10.077422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.455 [2024-11-29 13:13:10.077428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.455 [2024-11-29 13:13:10.077444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.455 qpair failed and we were unable to recover it. 
00:29:10.455 [2024-11-29 13:13:10.087386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.455 [2024-11-29 13:13:10.087451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.455 [2024-11-29 13:13:10.087470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.455 [2024-11-29 13:13:10.087477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.455 [2024-11-29 13:13:10.087484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.455 [2024-11-29 13:13:10.087502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.455 qpair failed and we were unable to recover it. 
00:29:10.455 [2024-11-29 13:13:10.097404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.455 [2024-11-29 13:13:10.097464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.455 [2024-11-29 13:13:10.097479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.455 [2024-11-29 13:13:10.097486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.455 [2024-11-29 13:13:10.097492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.455 [2024-11-29 13:13:10.097508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.107431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.107484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.107499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.107506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.107513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.107532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.117504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.117563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.117578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.117585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.117591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.117606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.127536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.127593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.127608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.127616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.127622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.127637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.137572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.137626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.137641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.137648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.137654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.137669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.147633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.147690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.147705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.147712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.147718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.147733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.157655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.157718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.157735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.157742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.157748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.157764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.167627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.167723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.167739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.167746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.167752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.167768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.177683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.177737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.177752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.177758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.177764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.177779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.187686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.187756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.187770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.187777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.187783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.187798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.197794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.197855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.197870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.197881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.197887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.197903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.207767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.207826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.207839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.207847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.207856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.207872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.217720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.217775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.217790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.217797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.217803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.217818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.456 [2024-11-29 13:13:10.227860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.456 [2024-11-29 13:13:10.227913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.456 [2024-11-29 13:13:10.227927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.456 [2024-11-29 13:13:10.227934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.456 [2024-11-29 13:13:10.227940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.456 [2024-11-29 13:13:10.227961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.456 qpair failed and we were unable to recover it. 
00:29:10.457 [2024-11-29 13:13:10.237928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.457 [2024-11-29 13:13:10.238004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.457 [2024-11-29 13:13:10.238023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.457 [2024-11-29 13:13:10.238030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.457 [2024-11-29 13:13:10.238036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.457 [2024-11-29 13:13:10.238055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.457 qpair failed and we were unable to recover it. 
00:29:10.457 [2024-11-29 13:13:10.247885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.457 [2024-11-29 13:13:10.247943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.457 [2024-11-29 13:13:10.247961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.457 [2024-11-29 13:13:10.247968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.457 [2024-11-29 13:13:10.247974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.457 [2024-11-29 13:13:10.247988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.457 qpair failed and we were unable to recover it. 
00:29:10.457 [2024-11-29 13:13:10.257939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.457 [2024-11-29 13:13:10.258051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.457 [2024-11-29 13:13:10.258069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.457 [2024-11-29 13:13:10.258077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.457 [2024-11-29 13:13:10.258083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.457 [2024-11-29 13:13:10.258100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.457 qpair failed and we were unable to recover it. 
00:29:10.717 [2024-11-29 13:13:10.267970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.717 [2024-11-29 13:13:10.268040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.717 [2024-11-29 13:13:10.268059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.717 [2024-11-29 13:13:10.268066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.717 [2024-11-29 13:13:10.268073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.717 [2024-11-29 13:13:10.268090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.717 qpair failed and we were unable to recover it. 
00:29:10.717 [2024-11-29 13:13:10.277992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.717 [2024-11-29 13:13:10.278053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.717 [2024-11-29 13:13:10.278070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.717 [2024-11-29 13:13:10.278077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.717 [2024-11-29 13:13:10.278083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.717 [2024-11-29 13:13:10.278099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.717 qpair failed and we were unable to recover it. 
00:29:10.717 [2024-11-29 13:13:10.288012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.717 [2024-11-29 13:13:10.288081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.717 [2024-11-29 13:13:10.288098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.717 [2024-11-29 13:13:10.288105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.717 [2024-11-29 13:13:10.288111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.717 [2024-11-29 13:13:10.288127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.717 qpair failed and we were unable to recover it. 
00:29:10.717 [2024-11-29 13:13:10.298048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.717 [2024-11-29 13:13:10.298109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.717 [2024-11-29 13:13:10.298124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.717 [2024-11-29 13:13:10.298130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.717 [2024-11-29 13:13:10.298137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.717 [2024-11-29 13:13:10.298152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.717 qpair failed and we were unable to recover it. 
00:29:10.717 [2024-11-29 13:13:10.308081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.717 [2024-11-29 13:13:10.308142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.717 [2024-11-29 13:13:10.308157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.717 [2024-11-29 13:13:10.308164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.717 [2024-11-29 13:13:10.308170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.717 [2024-11-29 13:13:10.308185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.717 qpair failed and we were unable to recover it. 
00:29:10.717 [2024-11-29 13:13:10.318147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.717 [2024-11-29 13:13:10.318253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.717 [2024-11-29 13:13:10.318267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.717 [2024-11-29 13:13:10.318274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.717 [2024-11-29 13:13:10.318280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0 00:29:10.717 [2024-11-29 13:13:10.318295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.717 qpair failed and we were unable to recover it. 
00:29:10.717 [2024-11-29 13:13:10.328117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.717 [2024-11-29 13:13:10.328174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.717 [2024-11-29 13:13:10.328191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.717 [2024-11-29 13:13:10.328202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.717 [2024-11-29 13:13:10.328208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.717 [2024-11-29 13:13:10.328224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.717 qpair failed and we were unable to recover it.
00:29:10.717 [2024-11-29 13:13:10.338141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.717 [2024-11-29 13:13:10.338221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.717 [2024-11-29 13:13:10.338236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.717 [2024-11-29 13:13:10.338243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.717 [2024-11-29 13:13:10.338249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.717 [2024-11-29 13:13:10.338265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.717 qpair failed and we were unable to recover it.
00:29:10.717 [2024-11-29 13:13:10.348184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.717 [2024-11-29 13:13:10.348264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.717 [2024-11-29 13:13:10.348279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.717 [2024-11-29 13:13:10.348286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.717 [2024-11-29 13:13:10.348292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.717 [2024-11-29 13:13:10.348309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.717 qpair failed and we were unable to recover it.
00:29:10.717 [2024-11-29 13:13:10.358209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.717 [2024-11-29 13:13:10.358315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.717 [2024-11-29 13:13:10.358331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.717 [2024-11-29 13:13:10.358338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.717 [2024-11-29 13:13:10.358345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.717 [2024-11-29 13:13:10.358359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.717 qpair failed and we were unable to recover it.
00:29:10.717 [2024-11-29 13:13:10.368242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.717 [2024-11-29 13:13:10.368301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.717 [2024-11-29 13:13:10.368316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.717 [2024-11-29 13:13:10.368323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.717 [2024-11-29 13:13:10.368329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.717 [2024-11-29 13:13:10.368350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.378258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.378310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.378325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.378331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.378337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.378352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.388284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.388340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.388355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.388362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.388368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.388382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.398347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.398405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.398420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.398427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.398433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.398448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.408395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.408453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.408467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.408474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.408480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.408495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.418433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.418494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.418508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.418515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.418521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.418536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.428408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.428457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.428472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.428479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.428485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.428500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.438448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.438513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.438527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.438534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.438540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.438555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.448488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.448545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.448559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.448566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.448572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.448586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.458477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.458538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.458556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.458564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.458570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.458585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.468543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.468600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.468615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.468622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.468628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.468643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.478548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.478608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.478622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.478629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.478635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.478649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.488580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.488640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.488654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.488661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.488667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.488681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.498589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.718 [2024-11-29 13:13:10.498643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.718 [2024-11-29 13:13:10.498658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.718 [2024-11-29 13:13:10.498665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.718 [2024-11-29 13:13:10.498671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.718 [2024-11-29 13:13:10.498689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.718 qpair failed and we were unable to recover it.
00:29:10.718 [2024-11-29 13:13:10.508640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.719 [2024-11-29 13:13:10.508693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.719 [2024-11-29 13:13:10.508707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.719 [2024-11-29 13:13:10.508714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.719 [2024-11-29 13:13:10.508720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.719 [2024-11-29 13:13:10.508734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.719 qpair failed and we were unable to recover it.
00:29:10.719 [2024-11-29 13:13:10.518673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.719 [2024-11-29 13:13:10.518730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.719 [2024-11-29 13:13:10.518745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.719 [2024-11-29 13:13:10.518752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.719 [2024-11-29 13:13:10.518758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.719 [2024-11-29 13:13:10.518772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.719 qpair failed and we were unable to recover it.
00:29:10.719 [2024-11-29 13:13:10.528746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.719 [2024-11-29 13:13:10.528810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.719 [2024-11-29 13:13:10.528825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.719 [2024-11-29 13:13:10.528831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.719 [2024-11-29 13:13:10.528838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.719 [2024-11-29 13:13:10.528852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.719 qpair failed and we were unable to recover it.
00:29:10.979 [2024-11-29 13:13:10.538724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.979 [2024-11-29 13:13:10.538778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.979 [2024-11-29 13:13:10.538796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.979 [2024-11-29 13:13:10.538802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.979 [2024-11-29 13:13:10.538808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.979 [2024-11-29 13:13:10.538824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-11-29 13:13:10.548741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.979 [2024-11-29 13:13:10.548799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.979 [2024-11-29 13:13:10.548816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.979 [2024-11-29 13:13:10.548823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.979 [2024-11-29 13:13:10.548829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.979 [2024-11-29 13:13:10.548846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-11-29 13:13:10.558793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.979 [2024-11-29 13:13:10.558865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.979 [2024-11-29 13:13:10.558881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.979 [2024-11-29 13:13:10.558888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.979 [2024-11-29 13:13:10.558894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.979 [2024-11-29 13:13:10.558909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-11-29 13:13:10.568824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.979 [2024-11-29 13:13:10.568881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.979 [2024-11-29 13:13:10.568897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.979 [2024-11-29 13:13:10.568905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.979 [2024-11-29 13:13:10.568911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.979 [2024-11-29 13:13:10.568927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-11-29 13:13:10.578860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.979 [2024-11-29 13:13:10.578919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.979 [2024-11-29 13:13:10.578934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.979 [2024-11-29 13:13:10.578941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.979 [2024-11-29 13:13:10.578950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.979 [2024-11-29 13:13:10.578965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-11-29 13:13:10.588863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.979 [2024-11-29 13:13:10.588916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.979 [2024-11-29 13:13:10.588933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.979 [2024-11-29 13:13:10.588940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.979 [2024-11-29 13:13:10.588946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.979 [2024-11-29 13:13:10.588965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-11-29 13:13:10.598902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.979 [2024-11-29 13:13:10.598964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.979 [2024-11-29 13:13:10.598979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.979 [2024-11-29 13:13:10.598987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.979 [2024-11-29 13:13:10.598993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.979 [2024-11-29 13:13:10.599008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-11-29 13:13:10.608934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.979 [2024-11-29 13:13:10.609001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.979 [2024-11-29 13:13:10.609016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.979 [2024-11-29 13:13:10.609023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.979 [2024-11-29 13:13:10.609030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.979 [2024-11-29 13:13:10.609044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.979 qpair failed and we were unable to recover it.
00:29:10.979 [2024-11-29 13:13:10.618985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.979 [2024-11-29 13:13:10.619048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.980 [2024-11-29 13:13:10.619063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.980 [2024-11-29 13:13:10.619070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.980 [2024-11-29 13:13:10.619076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.980 [2024-11-29 13:13:10.619092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-11-29 13:13:10.628986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.980 [2024-11-29 13:13:10.629046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.980 [2024-11-29 13:13:10.629061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.980 [2024-11-29 13:13:10.629068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.980 [2024-11-29 13:13:10.629075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.980 [2024-11-29 13:13:10.629094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-11-29 13:13:10.638960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.980 [2024-11-29 13:13:10.639020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.980 [2024-11-29 13:13:10.639034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.980 [2024-11-29 13:13:10.639041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.980 [2024-11-29 13:13:10.639047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.980 [2024-11-29 13:13:10.639061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-11-29 13:13:10.648982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.980 [2024-11-29 13:13:10.649043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.980 [2024-11-29 13:13:10.649057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.980 [2024-11-29 13:13:10.649064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.980 [2024-11-29 13:13:10.649070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.980 [2024-11-29 13:13:10.649086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-11-29 13:13:10.659093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.980 [2024-11-29 13:13:10.659149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.980 [2024-11-29 13:13:10.659164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.980 [2024-11-29 13:13:10.659171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.980 [2024-11-29 13:13:10.659177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.980 [2024-11-29 13:13:10.659193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-11-29 13:13:10.669119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:10.980 [2024-11-29 13:13:10.669176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:10.980 [2024-11-29 13:13:10.669190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:10.980 [2024-11-29 13:13:10.669197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:10.980 [2024-11-29 13:13:10.669203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1dfabe0
00:29:10.980 [2024-11-29 13:13:10.669218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:10.980 qpair failed and we were unable to recover it.
00:29:10.980 [2024-11-29 13:13:10.679156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.980 [2024-11-29 13:13:10.679225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.980 [2024-11-29 13:13:10.679254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.980 [2024-11-29 13:13:10.679266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.980 [2024-11-29 13:13:10.679277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.980 [2024-11-29 13:13:10.679302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-11-29 13:13:10.689143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.980 [2024-11-29 13:13:10.689200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.980 [2024-11-29 13:13:10.689214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.980 [2024-11-29 13:13:10.689221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.980 [2024-11-29 13:13:10.689228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.980 [2024-11-29 13:13:10.689244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-11-29 13:13:10.699166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.980 [2024-11-29 13:13:10.699224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.980 [2024-11-29 13:13:10.699239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.980 [2024-11-29 13:13:10.699246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.980 [2024-11-29 13:13:10.699252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.980 [2024-11-29 13:13:10.699267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-11-29 13:13:10.709207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.980 [2024-11-29 13:13:10.709265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.980 [2024-11-29 13:13:10.709280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.980 [2024-11-29 13:13:10.709287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.980 [2024-11-29 13:13:10.709293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.980 [2024-11-29 13:13:10.709308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-11-29 13:13:10.719255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.980 [2024-11-29 13:13:10.719340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.980 [2024-11-29 13:13:10.719358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.980 [2024-11-29 13:13:10.719365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.980 [2024-11-29 13:13:10.719372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.980 [2024-11-29 13:13:10.719389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-11-29 13:13:10.729269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.980 [2024-11-29 13:13:10.729323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.980 [2024-11-29 13:13:10.729337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.980 [2024-11-29 13:13:10.729344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.980 [2024-11-29 13:13:10.729350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.980 [2024-11-29 13:13:10.729366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-11-29 13:13:10.739430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.980 [2024-11-29 13:13:10.739532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.980 [2024-11-29 13:13:10.739547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.980 [2024-11-29 13:13:10.739554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.980 [2024-11-29 13:13:10.739560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.980 [2024-11-29 13:13:10.739576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.980 qpair failed and we were unable to recover it. 
00:29:10.980 [2024-11-29 13:13:10.749360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.980 [2024-11-29 13:13:10.749423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.980 [2024-11-29 13:13:10.749437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.981 [2024-11-29 13:13:10.749444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.981 [2024-11-29 13:13:10.749450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.981 [2024-11-29 13:13:10.749465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-11-29 13:13:10.759385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.981 [2024-11-29 13:13:10.759445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.981 [2024-11-29 13:13:10.759459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.981 [2024-11-29 13:13:10.759466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.981 [2024-11-29 13:13:10.759476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.981 [2024-11-29 13:13:10.759492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-11-29 13:13:10.769423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.981 [2024-11-29 13:13:10.769482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.981 [2024-11-29 13:13:10.769496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.981 [2024-11-29 13:13:10.769503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.981 [2024-11-29 13:13:10.769509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.981 [2024-11-29 13:13:10.769524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-11-29 13:13:10.779419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.981 [2024-11-29 13:13:10.779480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.981 [2024-11-29 13:13:10.779494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.981 [2024-11-29 13:13:10.779501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.981 [2024-11-29 13:13:10.779507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.981 [2024-11-29 13:13:10.779522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:10.981 [2024-11-29 13:13:10.789440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.981 [2024-11-29 13:13:10.789494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.981 [2024-11-29 13:13:10.789508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.981 [2024-11-29 13:13:10.789515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.981 [2024-11-29 13:13:10.789521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:10.981 [2024-11-29 13:13:10.789537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:10.981 qpair failed and we were unable to recover it. 
00:29:11.241 [2024-11-29 13:13:10.799489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.241 [2024-11-29 13:13:10.799548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.241 [2024-11-29 13:13:10.799562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.241 [2024-11-29 13:13:10.799569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.241 [2024-11-29 13:13:10.799575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.241 [2024-11-29 13:13:10.799590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.809511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.809582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.809596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.809603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.809610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.809625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.819535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.819592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.819607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.819613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.819619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.819634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.829562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.829622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.829637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.829644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.829650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.829665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.839588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.839648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.839661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.839668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.839674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.839690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.849611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.849667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.849684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.849691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.849697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.849713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.859618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.859675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.859689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.859696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.859702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.859717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.869642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.869700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.869716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.869723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.869729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.869745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.879745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.879822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.879836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.879843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.879849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.879864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.889751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.889805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.889819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.889826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.889835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.889851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.899760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.899817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.899832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.899838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.899844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.899860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.909812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.909868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.909882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.909888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.909894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.909909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.919879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.919942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.919960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.919966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.919973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.242 [2024-11-29 13:13:10.919988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.242 qpair failed and we were unable to recover it. 
00:29:11.242 [2024-11-29 13:13:10.929844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.242 [2024-11-29 13:13:10.929901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.242 [2024-11-29 13:13:10.929915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.242 [2024-11-29 13:13:10.929922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.242 [2024-11-29 13:13:10.929928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:10.929943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:10.939861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:10.939917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:10.939931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:10.939938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:10.939944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:10.939965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:10.949890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:10.949949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:10.949964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:10.949971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:10.949977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:10.949992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:10.959913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:10.959973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:10.959987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:10.959993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:10.959999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:10.960014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:10.969957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:10.970015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:10.970028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:10.970035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:10.970041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:10.970056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:10.979984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:10.980040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:10.980057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:10.980063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:10.980069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:10.980084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:10.990010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:10.990073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:10.990086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:10.990093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:10.990099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:10.990114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:11.000091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:11.000150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:11.000163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:11.000170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:11.000176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:11.000191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:11.010109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:11.010175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:11.010188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:11.010195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:11.010201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:11.010216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:11.020082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:11.020141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:11.020154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:11.020163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:11.020170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:11.020185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:11.030119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:11.030177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:11.030191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:11.030197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:11.030203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:11.030218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:11.040186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:11.040262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:11.040279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:11.040286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:11.040292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:11.040309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.243 [2024-11-29 13:13:11.050185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.243 [2024-11-29 13:13:11.050269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.243 [2024-11-29 13:13:11.050282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.243 [2024-11-29 13:13:11.050289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.243 [2024-11-29 13:13:11.050295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.243 [2024-11-29 13:13:11.050311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.243 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-11-29 13:13:11.060224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.504 [2024-11-29 13:13:11.060286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.504 [2024-11-29 13:13:11.060300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.504 [2024-11-29 13:13:11.060307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.504 [2024-11-29 13:13:11.060313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.504 [2024-11-29 13:13:11.060331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-11-29 13:13:11.070250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.504 [2024-11-29 13:13:11.070346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.504 [2024-11-29 13:13:11.070361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.504 [2024-11-29 13:13:11.070368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.504 [2024-11-29 13:13:11.070374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.504 [2024-11-29 13:13:11.070389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-11-29 13:13:11.080301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.504 [2024-11-29 13:13:11.080361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.504 [2024-11-29 13:13:11.080374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.504 [2024-11-29 13:13:11.080381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.504 [2024-11-29 13:13:11.080387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.504 [2024-11-29 13:13:11.080403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-11-29 13:13:11.090296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.504 [2024-11-29 13:13:11.090356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.504 [2024-11-29 13:13:11.090370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.504 [2024-11-29 13:13:11.090377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.504 [2024-11-29 13:13:11.090383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.504 [2024-11-29 13:13:11.090397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-11-29 13:13:11.100322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.504 [2024-11-29 13:13:11.100379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.504 [2024-11-29 13:13:11.100392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.504 [2024-11-29 13:13:11.100399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.504 [2024-11-29 13:13:11.100405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.504 [2024-11-29 13:13:11.100421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-11-29 13:13:11.110355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.504 [2024-11-29 13:13:11.110414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.504 [2024-11-29 13:13:11.110430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.504 [2024-11-29 13:13:11.110436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.504 [2024-11-29 13:13:11.110443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.504 [2024-11-29 13:13:11.110458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.504 qpair failed and we were unable to recover it. 
00:29:11.504 [2024-11-29 13:13:11.120406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.504 [2024-11-29 13:13:11.120463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.504 [2024-11-29 13:13:11.120477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.504 [2024-11-29 13:13:11.120484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.504 [2024-11-29 13:13:11.120489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.504 [2024-11-29 13:13:11.120504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.130417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.130472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.130486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.130492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.130498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.130514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.140447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.140505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.140518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.140525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.140531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.140547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.150469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.150523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.150537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.150546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.150552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.150568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.160496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.160555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.160569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.160575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.160581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.160597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.170545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.170603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.170617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.170624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.170629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.170645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.180551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.180607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.180621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.180628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.180633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.180649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.190578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.190636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.190649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.190656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.190662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.190681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.200612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.200671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.200685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.200691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.200697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.200712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.210664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.210722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.210735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.210742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.210748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.210763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.220688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.220776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.220791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.220797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.220803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.220819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.230695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.230754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.230768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.230774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.230780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.230795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.240722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.240813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.240828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.240835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.240841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.240856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.505 [2024-11-29 13:13:11.250789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.505 [2024-11-29 13:13:11.250843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.505 [2024-11-29 13:13:11.250857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.505 [2024-11-29 13:13:11.250864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.505 [2024-11-29 13:13:11.250870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.505 [2024-11-29 13:13:11.250885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.505 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-11-29 13:13:11.260803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.506 [2024-11-29 13:13:11.260861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.506 [2024-11-29 13:13:11.260875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.506 [2024-11-29 13:13:11.260882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.506 [2024-11-29 13:13:11.260888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.506 [2024-11-29 13:13:11.260903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-11-29 13:13:11.270750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.506 [2024-11-29 13:13:11.270802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.506 [2024-11-29 13:13:11.270816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.506 [2024-11-29 13:13:11.270822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.506 [2024-11-29 13:13:11.270828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.506 [2024-11-29 13:13:11.270845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-11-29 13:13:11.280875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.506 [2024-11-29 13:13:11.280935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.506 [2024-11-29 13:13:11.280957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.506 [2024-11-29 13:13:11.280965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.506 [2024-11-29 13:13:11.280971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.506 [2024-11-29 13:13:11.280986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-11-29 13:13:11.290850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.506 [2024-11-29 13:13:11.290957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.506 [2024-11-29 13:13:11.290971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.506 [2024-11-29 13:13:11.290978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.506 [2024-11-29 13:13:11.290985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.506 [2024-11-29 13:13:11.291001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-11-29 13:13:11.300887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.506 [2024-11-29 13:13:11.300946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.506 [2024-11-29 13:13:11.300965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.506 [2024-11-29 13:13:11.300972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.506 [2024-11-29 13:13:11.300978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.506 [2024-11-29 13:13:11.300993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-11-29 13:13:11.310900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.506 [2024-11-29 13:13:11.310963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.506 [2024-11-29 13:13:11.310977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.506 [2024-11-29 13:13:11.310984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.506 [2024-11-29 13:13:11.310990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.506 [2024-11-29 13:13:11.311005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.506 [2024-11-29 13:13:11.320943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.506 [2024-11-29 13:13:11.321015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.506 [2024-11-29 13:13:11.321029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.506 [2024-11-29 13:13:11.321036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.506 [2024-11-29 13:13:11.321048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.506 [2024-11-29 13:13:11.321064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.506 qpair failed and we were unable to recover it. 
00:29:11.766 [2024-11-29 13:13:11.330976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.766 [2024-11-29 13:13:11.331041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.766 [2024-11-29 13:13:11.331054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.766 [2024-11-29 13:13:11.331061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.766 [2024-11-29 13:13:11.331067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.766 [2024-11-29 13:13:11.331082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.766 qpair failed and we were unable to recover it. 
00:29:11.766 [2024-11-29 13:13:11.340993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.766 [2024-11-29 13:13:11.341051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.766 [2024-11-29 13:13:11.341064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.766 [2024-11-29 13:13:11.341071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.766 [2024-11-29 13:13:11.341077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.766 [2024-11-29 13:13:11.341092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.766 qpair failed and we were unable to recover it. 
00:29:11.766 [2024-11-29 13:13:11.351050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.766 [2024-11-29 13:13:11.351108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.766 [2024-11-29 13:13:11.351121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.766 [2024-11-29 13:13:11.351127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.766 [2024-11-29 13:13:11.351133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.766 [2024-11-29 13:13:11.351148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.766 qpair failed and we were unable to recover it. 
00:29:11.766 [2024-11-29 13:13:11.361057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.766 [2024-11-29 13:13:11.361117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.766 [2024-11-29 13:13:11.361131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.766 [2024-11-29 13:13:11.361137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.766 [2024-11-29 13:13:11.361143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.766 [2024-11-29 13:13:11.361159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.766 qpair failed and we were unable to recover it. 
00:29:11.766 [2024-11-29 13:13:11.371143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.766 [2024-11-29 13:13:11.371201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.766 [2024-11-29 13:13:11.371215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.766 [2024-11-29 13:13:11.371221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.766 [2024-11-29 13:13:11.371227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.371242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.381044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.381103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.381117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.381124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.381129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.381145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.391121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.391171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.391185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.391192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.391198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.391212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.401134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.401192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.401207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.401213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.401219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.401233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.411232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.411291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.411307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.411314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.411320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.411335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.421156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.421210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.421224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.421230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.421236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.421250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.431226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.431278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.431291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.431298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.431304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.431318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.441322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.441379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.441393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.441400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.441406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.441421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.451237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.451289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.451303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.451309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.451318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.451333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.461347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.461423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.461438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.461445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.461451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.461468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.471407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.471458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.471472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.471479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.471485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.471501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.481340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.481398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.481412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.481419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.481425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.481440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.491461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.491517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.491531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.767 [2024-11-29 13:13:11.491538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.767 [2024-11-29 13:13:11.491543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.767 [2024-11-29 13:13:11.491559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.767 qpair failed and we were unable to recover it. 
00:29:11.767 [2024-11-29 13:13:11.501433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.767 [2024-11-29 13:13:11.501489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.767 [2024-11-29 13:13:11.501502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.768 [2024-11-29 13:13:11.501509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.768 [2024-11-29 13:13:11.501515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.768 [2024-11-29 13:13:11.501530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.768 qpair failed and we were unable to recover it. 
00:29:11.768 [2024-11-29 13:13:11.511466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.768 [2024-11-29 13:13:11.511523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.768 [2024-11-29 13:13:11.511537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.768 [2024-11-29 13:13:11.511544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.768 [2024-11-29 13:13:11.511550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.768 [2024-11-29 13:13:11.511565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.768 qpair failed and we were unable to recover it. 
00:29:11.768 [2024-11-29 13:13:11.521447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.768 [2024-11-29 13:13:11.521503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.768 [2024-11-29 13:13:11.521517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.768 [2024-11-29 13:13:11.521524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.768 [2024-11-29 13:13:11.521530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.768 [2024-11-29 13:13:11.521544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.768 qpair failed and we were unable to recover it. 
00:29:11.768 [2024-11-29 13:13:11.531548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.768 [2024-11-29 13:13:11.531626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.768 [2024-11-29 13:13:11.531640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.768 [2024-11-29 13:13:11.531647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.768 [2024-11-29 13:13:11.531654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.768 [2024-11-29 13:13:11.531668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.768 qpair failed and we were unable to recover it. 
00:29:11.768 [2024-11-29 13:13:11.541564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.768 [2024-11-29 13:13:11.541618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.768 [2024-11-29 13:13:11.541636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.768 [2024-11-29 13:13:11.541643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.768 [2024-11-29 13:13:11.541649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.768 [2024-11-29 13:13:11.541664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.768 qpair failed and we were unable to recover it. 
00:29:11.768 [2024-11-29 13:13:11.551584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.768 [2024-11-29 13:13:11.551684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.768 [2024-11-29 13:13:11.551698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.768 [2024-11-29 13:13:11.551705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.768 [2024-11-29 13:13:11.551711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.768 [2024-11-29 13:13:11.551727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.768 qpair failed and we were unable to recover it. 
00:29:11.768 [2024-11-29 13:13:11.561615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.768 [2024-11-29 13:13:11.561715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.768 [2024-11-29 13:13:11.561730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.768 [2024-11-29 13:13:11.561737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.768 [2024-11-29 13:13:11.561743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.768 [2024-11-29 13:13:11.561758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.768 qpair failed and we were unable to recover it. 
00:29:11.768 [2024-11-29 13:13:11.571596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.768 [2024-11-29 13:13:11.571651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.768 [2024-11-29 13:13:11.571665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.768 [2024-11-29 13:13:11.571672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.768 [2024-11-29 13:13:11.571678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.768 [2024-11-29 13:13:11.571693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.768 qpair failed and we were unable to recover it. 
00:29:11.768 [2024-11-29 13:13:11.581677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.768 [2024-11-29 13:13:11.581732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.768 [2024-11-29 13:13:11.581746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.768 [2024-11-29 13:13:11.581756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.768 [2024-11-29 13:13:11.581762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:11.768 [2024-11-29 13:13:11.581777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:11.768 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.591660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.591743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.591758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.591766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.591772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.591787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.601687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.601745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.601759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.601765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.601771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.601786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.611706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.611761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.611775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.611782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.611788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.611802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.621793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.621850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.621865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.621872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.621878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.621899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.631868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.631926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.631939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.631951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.631958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.631972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.641859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.641920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.641934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.641941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.641951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.641966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.651932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.652014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.652028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.652035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.652041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.652056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.661839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.661889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.661903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.661910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.661916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.661931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.671994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.672060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.672074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.672081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.672087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.672103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.681981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.682038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.682052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.682059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.029 [2024-11-29 13:13:11.682065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.029 [2024-11-29 13:13:11.682081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-11-29 13:13:11.692066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.029 [2024-11-29 13:13:11.692174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.029 [2024-11-29 13:13:11.692189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.029 [2024-11-29 13:13:11.692196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.692203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.692218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.702021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.702077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.702091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.702098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.702104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.702119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.712087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.712142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.712156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.712166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.712172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.712188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.722041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.722099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.722113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.722120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.722126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.722141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.732127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.732186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.732200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.732207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.732213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.732229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.742138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.742193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.742207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.742214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.742219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.742234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.752134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.752234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.752249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.752256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.752262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.752280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.762214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.762275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.762288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.762295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.762300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.762316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.772298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.772355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.772368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.772375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.772380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.772395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.782272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.782328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.782342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.782349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.782354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.782370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.792304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.792356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.792370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.792376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.792382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.792397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.802309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.802369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.802382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.802389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.802394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.802409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.812390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.812497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.812511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.812518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.812524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.030 [2024-11-29 13:13:11.812539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-11-29 13:13:11.822388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.030 [2024-11-29 13:13:11.822447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.030 [2024-11-29 13:13:11.822460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.030 [2024-11-29 13:13:11.822467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.030 [2024-11-29 13:13:11.822473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.031 [2024-11-29 13:13:11.822487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.031 qpair failed and we were unable to recover it. 
00:29:12.031 [2024-11-29 13:13:11.832389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.031 [2024-11-29 13:13:11.832441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.031 [2024-11-29 13:13:11.832456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.031 [2024-11-29 13:13:11.832462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.031 [2024-11-29 13:13:11.832468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.031 [2024-11-29 13:13:11.832483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.031 qpair failed and we were unable to recover it. 
00:29:12.031 [2024-11-29 13:13:11.842430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.031 [2024-11-29 13:13:11.842488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.031 [2024-11-29 13:13:11.842504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.031 [2024-11-29 13:13:11.842511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.031 [2024-11-29 13:13:11.842517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.031 [2024-11-29 13:13:11.842532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.031 qpair failed and we were unable to recover it. 
00:29:12.291 [2024-11-29 13:13:11.852453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.291 [2024-11-29 13:13:11.852513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.291 [2024-11-29 13:13:11.852526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.291 [2024-11-29 13:13:11.852532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.291 [2024-11-29 13:13:11.852538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.291 [2024-11-29 13:13:11.852553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.291 qpair failed and we were unable to recover it. 
00:29:12.291 [2024-11-29 13:13:11.862486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.291 [2024-11-29 13:13:11.862542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.291 [2024-11-29 13:13:11.862556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.291 [2024-11-29 13:13:11.862562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.291 [2024-11-29 13:13:11.862568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.291 [2024-11-29 13:13:11.862583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.291 qpair failed and we were unable to recover it. 
00:29:12.291 [2024-11-29 13:13:11.872509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.291 [2024-11-29 13:13:11.872569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.291 [2024-11-29 13:13:11.872583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.291 [2024-11-29 13:13:11.872590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.291 [2024-11-29 13:13:11.872596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.291 [2024-11-29 13:13:11.872612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.291 qpair failed and we were unable to recover it. 
00:29:12.291 [2024-11-29 13:13:11.882550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.291 [2024-11-29 13:13:11.882612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.291 [2024-11-29 13:13:11.882625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.291 [2024-11-29 13:13:11.882632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.291 [2024-11-29 13:13:11.882641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.291 [2024-11-29 13:13:11.882657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.291 qpair failed and we were unable to recover it.
00:29:12.291 [2024-11-29 13:13:11.892566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.291 [2024-11-29 13:13:11.892629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.291 [2024-11-29 13:13:11.892666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.291 [2024-11-29 13:13:11.892673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.291 [2024-11-29 13:13:11.892680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.291 [2024-11-29 13:13:11.892704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.291 qpair failed and we were unable to recover it.
00:29:12.291 [2024-11-29 13:13:11.902589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.291 [2024-11-29 13:13:11.902652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.291 [2024-11-29 13:13:11.902666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.291 [2024-11-29 13:13:11.902674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.291 [2024-11-29 13:13:11.902680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.291 [2024-11-29 13:13:11.902696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.291 qpair failed and we were unable to recover it.
00:29:12.291 [2024-11-29 13:13:11.912633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.291 [2024-11-29 13:13:11.912684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.291 [2024-11-29 13:13:11.912698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.291 [2024-11-29 13:13:11.912705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.291 [2024-11-29 13:13:11.912711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.291 [2024-11-29 13:13:11.912726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.291 qpair failed and we were unable to recover it.
00:29:12.291 [2024-11-29 13:13:11.922650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.291 [2024-11-29 13:13:11.922737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.291 [2024-11-29 13:13:11.922752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.291 [2024-11-29 13:13:11.922759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.291 [2024-11-29 13:13:11.922766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.291 [2024-11-29 13:13:11.922781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.291 qpair failed and we were unable to recover it.
00:29:12.291 [2024-11-29 13:13:11.932680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.291 [2024-11-29 13:13:11.932759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.291 [2024-11-29 13:13:11.932774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.291 [2024-11-29 13:13:11.932781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.291 [2024-11-29 13:13:11.932787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.291 [2024-11-29 13:13:11.932802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.291 qpair failed and we were unable to recover it.
00:29:12.291 [2024-11-29 13:13:11.942711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.291 [2024-11-29 13:13:11.942768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.291 [2024-11-29 13:13:11.942783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:11.942790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:11.942796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:11.942811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:11.952709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:11.952772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:11.952786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:11.952793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:11.952799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:11.952814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:11.962783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:11.962843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:11.962856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:11.962863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:11.962869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:11.962884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:11.972812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:11.972869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:11.972886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:11.972893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:11.972899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:11.972914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:11.982857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:11.982955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:11.982970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:11.982976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:11.982983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:11.982998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:11.992796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:11.992856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:11.992869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:11.992876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:11.992882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:11.992897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:12.002957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:12.003053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:12.003068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:12.003075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:12.003081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:12.003097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:12.012988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:12.013095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:12.013110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:12.013116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:12.013126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:12.013142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:12.022950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:12.023003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:12.023017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:12.023023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:12.023029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:12.023044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:12.032943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:12.033002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:12.033015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:12.033022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:12.033028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:12.033043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:12.043038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:12.043108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:12.043122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:12.043129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:12.043135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:12.043151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:12.053035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:12.053095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:12.053109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:12.053116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:12.053122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:12.053138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:12.063067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:12.063129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:12.063143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:12.063150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.292 [2024-11-29 13:13:12.063155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.292 [2024-11-29 13:13:12.063171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.292 qpair failed and we were unable to recover it.
00:29:12.292 [2024-11-29 13:13:12.073111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.292 [2024-11-29 13:13:12.073167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.292 [2024-11-29 13:13:12.073180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.292 [2024-11-29 13:13:12.073187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.293 [2024-11-29 13:13:12.073192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.293 [2024-11-29 13:13:12.073207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.293 qpair failed and we were unable to recover it.
00:29:12.293 [2024-11-29 13:13:12.083187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.293 [2024-11-29 13:13:12.083244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.293 [2024-11-29 13:13:12.083257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.293 [2024-11-29 13:13:12.083264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.293 [2024-11-29 13:13:12.083269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.293 [2024-11-29 13:13:12.083284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.293 qpair failed and we were unable to recover it.
00:29:12.293 [2024-11-29 13:13:12.093185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.293 [2024-11-29 13:13:12.093241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.293 [2024-11-29 13:13:12.093255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.293 [2024-11-29 13:13:12.093261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.293 [2024-11-29 13:13:12.093267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.293 [2024-11-29 13:13:12.093282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.293 qpair failed and we were unable to recover it.
00:29:12.293 [2024-11-29 13:13:12.103186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.293 [2024-11-29 13:13:12.103246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.293 [2024-11-29 13:13:12.103260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.293 [2024-11-29 13:13:12.103267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.293 [2024-11-29 13:13:12.103273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.293 [2024-11-29 13:13:12.103288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.293 qpair failed and we were unable to recover it.
00:29:12.553 [2024-11-29 13:13:12.113269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.553 [2024-11-29 13:13:12.113321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.553 [2024-11-29 13:13:12.113335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.553 [2024-11-29 13:13:12.113341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.553 [2024-11-29 13:13:12.113347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.553 [2024-11-29 13:13:12.113362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.553 qpair failed and we were unable to recover it.
00:29:12.553 [2024-11-29 13:13:12.123233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.553 [2024-11-29 13:13:12.123322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.553 [2024-11-29 13:13:12.123337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.553 [2024-11-29 13:13:12.123344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.553 [2024-11-29 13:13:12.123350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.553 [2024-11-29 13:13:12.123365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.553 qpair failed and we were unable to recover it.
00:29:12.553 [2024-11-29 13:13:12.133272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.553 [2024-11-29 13:13:12.133333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.553 [2024-11-29 13:13:12.133346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.553 [2024-11-29 13:13:12.133353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.553 [2024-11-29 13:13:12.133359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.553 [2024-11-29 13:13:12.133374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.553 qpair failed and we were unable to recover it.
00:29:12.553 [2024-11-29 13:13:12.143298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.553 [2024-11-29 13:13:12.143355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.553 [2024-11-29 13:13:12.143368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.553 [2024-11-29 13:13:12.143378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.553 [2024-11-29 13:13:12.143384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.553 [2024-11-29 13:13:12.143399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.553 qpair failed and we were unable to recover it.
00:29:12.553 [2024-11-29 13:13:12.153327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.553 [2024-11-29 13:13:12.153405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.553 [2024-11-29 13:13:12.153420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.553 [2024-11-29 13:13:12.153427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.553 [2024-11-29 13:13:12.153433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.553 [2024-11-29 13:13:12.153448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.553 qpair failed and we were unable to recover it.
00:29:12.553 [2024-11-29 13:13:12.163369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.554 [2024-11-29 13:13:12.163433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.554 [2024-11-29 13:13:12.163446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.554 [2024-11-29 13:13:12.163453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.554 [2024-11-29 13:13:12.163459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.554 [2024-11-29 13:13:12.163474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.554 qpair failed and we were unable to recover it.
00:29:12.554 [2024-11-29 13:13:12.173388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.554 [2024-11-29 13:13:12.173445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.554 [2024-11-29 13:13:12.173458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.554 [2024-11-29 13:13:12.173465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.554 [2024-11-29 13:13:12.173471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.554 [2024-11-29 13:13:12.173486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.554 qpair failed and we were unable to recover it.
00:29:12.554 [2024-11-29 13:13:12.183420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.554 [2024-11-29 13:13:12.183477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.554 [2024-11-29 13:13:12.183491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.554 [2024-11-29 13:13:12.183497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.554 [2024-11-29 13:13:12.183503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.554 [2024-11-29 13:13:12.183521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.554 qpair failed and we were unable to recover it.
00:29:12.554 [2024-11-29 13:13:12.193446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.554 [2024-11-29 13:13:12.193504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.554 [2024-11-29 13:13:12.193517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.554 [2024-11-29 13:13:12.193524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.554 [2024-11-29 13:13:12.193530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.554 [2024-11-29 13:13:12.193544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.554 qpair failed and we were unable to recover it.
00:29:12.554 [2024-11-29 13:13:12.203480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.554 [2024-11-29 13:13:12.203540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.554 [2024-11-29 13:13:12.203555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.554 [2024-11-29 13:13:12.203561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.554 [2024-11-29 13:13:12.203567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.554 [2024-11-29 13:13:12.203582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.554 qpair failed and we were unable to recover it.
00:29:12.554 [2024-11-29 13:13:12.213435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.554 [2024-11-29 13:13:12.213493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.554 [2024-11-29 13:13:12.213507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.554 [2024-11-29 13:13:12.213514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.554 [2024-11-29 13:13:12.213519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.554 [2024-11-29 13:13:12.213534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.554 qpair failed and we were unable to recover it.
00:29:12.554 [2024-11-29 13:13:12.223525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.554 [2024-11-29 13:13:12.223584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.554 [2024-11-29 13:13:12.223598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.554 [2024-11-29 13:13:12.223604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.554 [2024-11-29 13:13:12.223611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:12.554 [2024-11-29 13:13:12.223626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.554 qpair failed and we were unable to recover it.
00:29:12.554 [2024-11-29 13:13:12.233561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.554 [2024-11-29 13:13:12.233623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.554 [2024-11-29 13:13:12.233637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.554 [2024-11-29 13:13:12.233644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.554 [2024-11-29 13:13:12.233651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.554 [2024-11-29 13:13:12.233666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.554 qpair failed and we were unable to recover it. 
00:29:12.554 [2024-11-29 13:13:12.243631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.554 [2024-11-29 13:13:12.243738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.554 [2024-11-29 13:13:12.243753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.554 [2024-11-29 13:13:12.243760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.554 [2024-11-29 13:13:12.243766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.554 [2024-11-29 13:13:12.243781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.554 qpair failed and we were unable to recover it. 
00:29:12.554 [2024-11-29 13:13:12.253610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.554 [2024-11-29 13:13:12.253667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.554 [2024-11-29 13:13:12.253681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.554 [2024-11-29 13:13:12.253687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.554 [2024-11-29 13:13:12.253694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.554 [2024-11-29 13:13:12.253710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.554 qpair failed and we were unable to recover it. 
00:29:12.554 [2024-11-29 13:13:12.263645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.554 [2024-11-29 13:13:12.263710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.554 [2024-11-29 13:13:12.263724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.554 [2024-11-29 13:13:12.263731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.554 [2024-11-29 13:13:12.263737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.554 [2024-11-29 13:13:12.263752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.554 qpair failed and we were unable to recover it. 
00:29:12.554 [2024-11-29 13:13:12.273706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.554 [2024-11-29 13:13:12.273810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.554 [2024-11-29 13:13:12.273825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.554 [2024-11-29 13:13:12.273834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.554 [2024-11-29 13:13:12.273840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.554 [2024-11-29 13:13:12.273855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.554 qpair failed and we were unable to recover it. 
00:29:12.554 [2024-11-29 13:13:12.283684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.554 [2024-11-29 13:13:12.283741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.554 [2024-11-29 13:13:12.283755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.554 [2024-11-29 13:13:12.283761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.554 [2024-11-29 13:13:12.283767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.554 [2024-11-29 13:13:12.283782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.554 qpair failed and we were unable to recover it. 
00:29:12.554 [2024-11-29 13:13:12.293656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.555 [2024-11-29 13:13:12.293713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.555 [2024-11-29 13:13:12.293727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.555 [2024-11-29 13:13:12.293734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.555 [2024-11-29 13:13:12.293740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.555 [2024-11-29 13:13:12.293755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.555 [2024-11-29 13:13:12.303814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.555 [2024-11-29 13:13:12.303870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.555 [2024-11-29 13:13:12.303884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.555 [2024-11-29 13:13:12.303891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.555 [2024-11-29 13:13:12.303897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.555 [2024-11-29 13:13:12.303912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.555 [2024-11-29 13:13:12.313840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.555 [2024-11-29 13:13:12.313900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.555 [2024-11-29 13:13:12.313914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.555 [2024-11-29 13:13:12.313921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.555 [2024-11-29 13:13:12.313927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.555 [2024-11-29 13:13:12.313953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.555 [2024-11-29 13:13:12.323834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.555 [2024-11-29 13:13:12.323895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.555 [2024-11-29 13:13:12.323910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.555 [2024-11-29 13:13:12.323917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.555 [2024-11-29 13:13:12.323923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.555 [2024-11-29 13:13:12.323939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.555 [2024-11-29 13:13:12.333833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.555 [2024-11-29 13:13:12.333898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.555 [2024-11-29 13:13:12.333912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.555 [2024-11-29 13:13:12.333919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.555 [2024-11-29 13:13:12.333925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.555 [2024-11-29 13:13:12.333940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.555 [2024-11-29 13:13:12.343903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.555 [2024-11-29 13:13:12.343963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.555 [2024-11-29 13:13:12.343977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.555 [2024-11-29 13:13:12.343984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.555 [2024-11-29 13:13:12.343990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.555 [2024-11-29 13:13:12.344005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.555 [2024-11-29 13:13:12.353908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.555 [2024-11-29 13:13:12.353965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.555 [2024-11-29 13:13:12.353979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.555 [2024-11-29 13:13:12.353986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.555 [2024-11-29 13:13:12.353991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.555 [2024-11-29 13:13:12.354006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.555 [2024-11-29 13:13:12.363943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.555 [2024-11-29 13:13:12.364038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.555 [2024-11-29 13:13:12.364053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.555 [2024-11-29 13:13:12.364060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.555 [2024-11-29 13:13:12.364066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.555 [2024-11-29 13:13:12.364081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.555 qpair failed and we were unable to recover it. 
00:29:12.815 [2024-11-29 13:13:12.374023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.815 [2024-11-29 13:13:12.374118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.815 [2024-11-29 13:13:12.374133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.815 [2024-11-29 13:13:12.374140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.815 [2024-11-29 13:13:12.374146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.815 [2024-11-29 13:13:12.374161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.815 qpair failed and we were unable to recover it. 
00:29:12.815 [2024-11-29 13:13:12.383997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.815 [2024-11-29 13:13:12.384051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.815 [2024-11-29 13:13:12.384065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.815 [2024-11-29 13:13:12.384071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.815 [2024-11-29 13:13:12.384077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.815 [2024-11-29 13:13:12.384091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.815 qpair failed and we were unable to recover it. 
00:29:12.815 [2024-11-29 13:13:12.394029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.815 [2024-11-29 13:13:12.394086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.815 [2024-11-29 13:13:12.394099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.815 [2024-11-29 13:13:12.394106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.815 [2024-11-29 13:13:12.394112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.815 [2024-11-29 13:13:12.394127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.815 qpair failed and we were unable to recover it. 
00:29:12.815 [2024-11-29 13:13:12.404053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.404113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.404130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.404136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.404142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.404157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.414101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.414153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.414166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.414173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.414179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.414193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.424126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.424183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.424197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.424204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.424210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.424225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.434175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.434237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.434251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.434257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.434264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.434279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.444186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.444248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.444262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.444268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.444278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.444292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.454202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.454272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.454286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.454293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.454300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.454315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.464238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.464295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.464308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.464315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.464321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.464336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.474284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.474364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.474379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.474386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.474392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.474407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.484287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.484355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.484368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.484375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.484381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.484396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.494250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.494307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.494321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.494329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.494335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.494350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
00:29:12.816 [2024-11-29 13:13:12.504330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.816 [2024-11-29 13:13:12.504388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.816 [2024-11-29 13:13:12.504402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.816 [2024-11-29 13:13:12.504409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.816 [2024-11-29 13:13:12.504415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:12.816 [2024-11-29 13:13:12.504430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.816 qpair failed and we were unable to recover it. 
[The identical CONNECT failure block repeats at ~10 ms intervals from 13:13:12.514 through 13:13:12.845; repeated blocks elided.]
00:29:13.079 [2024-11-29 13:13:12.855275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.079 [2024-11-29 13:13:12.855336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.079 [2024-11-29 13:13:12.855349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.079 [2024-11-29 13:13:12.855356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.079 [2024-11-29 13:13:12.855362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.079 [2024-11-29 13:13:12.855377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.079 qpair failed and we were unable to recover it. 
00:29:13.079 [2024-11-29 13:13:12.865388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.079 [2024-11-29 13:13:12.865469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.079 [2024-11-29 13:13:12.865484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.079 [2024-11-29 13:13:12.865490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.079 [2024-11-29 13:13:12.865497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.079 [2024-11-29 13:13:12.865512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.079 qpair failed and we were unable to recover it. 
00:29:13.079 [2024-11-29 13:13:12.875327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.079 [2024-11-29 13:13:12.875383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.079 [2024-11-29 13:13:12.875397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.079 [2024-11-29 13:13:12.875404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.079 [2024-11-29 13:13:12.875410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.079 [2024-11-29 13:13:12.875428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.079 qpair failed and we were unable to recover it. 
00:29:13.079 [2024-11-29 13:13:12.885439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.079 [2024-11-29 13:13:12.885500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.079 [2024-11-29 13:13:12.885513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.079 [2024-11-29 13:13:12.885520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.079 [2024-11-29 13:13:12.885526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.079 [2024-11-29 13:13:12.885541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.079 qpair failed and we were unable to recover it. 
00:29:13.079 [2024-11-29 13:13:12.895442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.079 [2024-11-29 13:13:12.895504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.080 [2024-11-29 13:13:12.895518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.080 [2024-11-29 13:13:12.895524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.080 [2024-11-29 13:13:12.895531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.080 [2024-11-29 13:13:12.895546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.080 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.905489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.905551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.905565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.905572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.905578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.905593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.915578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.915667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.915681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.915688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.915695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.915709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.925596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.925673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.925688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.925695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.925701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.925716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.935579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.935631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.935646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.935652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.935658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.935673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.945573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.945635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.945649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.945656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.945662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.945677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.955550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.955655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.955668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.955675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.955681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.955696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.965688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.965749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.965766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.965773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.965779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.965795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.975723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.975781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.975795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.975802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.975808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.975822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.985725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.985783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.985797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.985804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.985810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.985825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:12.995734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:12.995792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:12.995806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:12.995813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.340 [2024-11-29 13:13:12.995820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.340 [2024-11-29 13:13:12.995834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.340 qpair failed and we were unable to recover it. 
00:29:13.340 [2024-11-29 13:13:13.005760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.340 [2024-11-29 13:13:13.005817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.340 [2024-11-29 13:13:13.005831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.340 [2024-11-29 13:13:13.005837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.005847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.005863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.015791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.015880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.015894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.015901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.015907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.015923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.025769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.025825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.025839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.025846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.025852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.025867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.035854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.035913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.035927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.035933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.035939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.035960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.045930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.045994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.046008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.046015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.046021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.046036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.055882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.055940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.055960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.055966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.055972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.055987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.065958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.066016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.066030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.066037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.066043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.066059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.075959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.076018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.076033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.076039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.076045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.076060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.086046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.086106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.086120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.086127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.086133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.086148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.096012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.096071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.096088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.096095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.096101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.096116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.106054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.106109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.106123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.106130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.106135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.106150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.116125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.341 [2024-11-29 13:13:13.116183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.341 [2024-11-29 13:13:13.116198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.341 [2024-11-29 13:13:13.116204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.341 [2024-11-29 13:13:13.116210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.341 [2024-11-29 13:13:13.116225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.341 qpair failed and we were unable to recover it. 
00:29:13.341 [2024-11-29 13:13:13.126162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.341 [2024-11-29 13:13:13.126221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.341 [2024-11-29 13:13:13.126235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.341 [2024-11-29 13:13:13.126241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.341 [2024-11-29 13:13:13.126247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.341 [2024-11-29 13:13:13.126262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.341 qpair failed and we were unable to recover it.
00:29:13.341 [2024-11-29 13:13:13.136151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.341 [2024-11-29 13:13:13.136212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.341 [2024-11-29 13:13:13.136226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.341 [2024-11-29 13:13:13.136232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.341 [2024-11-29 13:13:13.136241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.342 [2024-11-29 13:13:13.136256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.342 qpair failed and we were unable to recover it.
00:29:13.342 [2024-11-29 13:13:13.146178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.342 [2024-11-29 13:13:13.146235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.342 [2024-11-29 13:13:13.146249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.342 [2024-11-29 13:13:13.146256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.342 [2024-11-29 13:13:13.146261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.342 [2024-11-29 13:13:13.146276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.342 qpair failed and we were unable to recover it.
00:29:13.342 [2024-11-29 13:13:13.156193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.342 [2024-11-29 13:13:13.156250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.342 [2024-11-29 13:13:13.156264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.342 [2024-11-29 13:13:13.156271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.342 [2024-11-29 13:13:13.156276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.342 [2024-11-29 13:13:13.156292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.342 qpair failed and we were unable to recover it.
00:29:13.603 [2024-11-29 13:13:13.166230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.603 [2024-11-29 13:13:13.166284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.603 [2024-11-29 13:13:13.166298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.603 [2024-11-29 13:13:13.166304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.603 [2024-11-29 13:13:13.166310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.603 [2024-11-29 13:13:13.166325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.603 qpair failed and we were unable to recover it.
00:29:13.603 [2024-11-29 13:13:13.176270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.603 [2024-11-29 13:13:13.176326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.603 [2024-11-29 13:13:13.176339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.603 [2024-11-29 13:13:13.176346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.603 [2024-11-29 13:13:13.176351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.603 [2024-11-29 13:13:13.176366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.603 qpair failed and we were unable to recover it.
00:29:13.603 [2024-11-29 13:13:13.186281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.603 [2024-11-29 13:13:13.186338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.603 [2024-11-29 13:13:13.186351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.603 [2024-11-29 13:13:13.186358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.603 [2024-11-29 13:13:13.186364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.603 [2024-11-29 13:13:13.186378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.603 qpair failed and we were unable to recover it.
00:29:13.603 [2024-11-29 13:13:13.196300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.603 [2024-11-29 13:13:13.196359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.603 [2024-11-29 13:13:13.196373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.603 [2024-11-29 13:13:13.196379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.603 [2024-11-29 13:13:13.196385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.603 [2024-11-29 13:13:13.196400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.603 qpair failed and we were unable to recover it.
00:29:13.603 [2024-11-29 13:13:13.206358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.603 [2024-11-29 13:13:13.206419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.603 [2024-11-29 13:13:13.206432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.603 [2024-11-29 13:13:13.206439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.603 [2024-11-29 13:13:13.206445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.603 [2024-11-29 13:13:13.206459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.603 qpair failed and we were unable to recover it.
00:29:13.603 [2024-11-29 13:13:13.216376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.603 [2024-11-29 13:13:13.216438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.603 [2024-11-29 13:13:13.216451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.603 [2024-11-29 13:13:13.216458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.603 [2024-11-29 13:13:13.216464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.603 [2024-11-29 13:13:13.216479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.603 qpair failed and we were unable to recover it.
00:29:13.603 [2024-11-29 13:13:13.226389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.603 [2024-11-29 13:13:13.226463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.603 [2024-11-29 13:13:13.226480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.603 [2024-11-29 13:13:13.226487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.603 [2024-11-29 13:13:13.226493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.603 [2024-11-29 13:13:13.226508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.603 qpair failed and we were unable to recover it.
00:29:13.603 [2024-11-29 13:13:13.236427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.603 [2024-11-29 13:13:13.236483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.603 [2024-11-29 13:13:13.236497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.603 [2024-11-29 13:13:13.236504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.603 [2024-11-29 13:13:13.236510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.603 [2024-11-29 13:13:13.236525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.603 qpair failed and we were unable to recover it.
00:29:13.603 [2024-11-29 13:13:13.246452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.603 [2024-11-29 13:13:13.246516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.603 [2024-11-29 13:13:13.246529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.603 [2024-11-29 13:13:13.246536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.246542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.246557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.256482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.256556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.256571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.256578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.256584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.256599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.266515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.266601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.266615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.266625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.266631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.266646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.276536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.276593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.276607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.276613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.276619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.276633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.286581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.286637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.286651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.286658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.286664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.286678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.296589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.296643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.296656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.296663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.296668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.296683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.306566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.306655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.306669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.306676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.306682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.306700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.316651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.316704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.316718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.316725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.316730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.316745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.326724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.326803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.326818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.326825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.326831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.326846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.336699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.336755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.336769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.336776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.336782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.336797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.346662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.346717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.346730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.346737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.346743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.346757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.356745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.356811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.356826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.356832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.356839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.356854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.366789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.366851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.366865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.366872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.366878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.604 [2024-11-29 13:13:13.366893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.604 qpair failed and we were unable to recover it.
00:29:13.604 [2024-11-29 13:13:13.376824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.604 [2024-11-29 13:13:13.376879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.604 [2024-11-29 13:13:13.376893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.604 [2024-11-29 13:13:13.376900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.604 [2024-11-29 13:13:13.376906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.605 [2024-11-29 13:13:13.376921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.605 qpair failed and we were unable to recover it.
00:29:13.605 [2024-11-29 13:13:13.386851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.605 [2024-11-29 13:13:13.386939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.605 [2024-11-29 13:13:13.386959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.605 [2024-11-29 13:13:13.386966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.605 [2024-11-29 13:13:13.386973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.605 [2024-11-29 13:13:13.386987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.605 qpair failed and we were unable to recover it.
00:29:13.605 [2024-11-29 13:13:13.396870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.605 [2024-11-29 13:13:13.396930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.605 [2024-11-29 13:13:13.396951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.605 [2024-11-29 13:13:13.396958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.605 [2024-11-29 13:13:13.396964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.605 [2024-11-29 13:13:13.396980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.605 qpair failed and we were unable to recover it.
00:29:13.605 [2024-11-29 13:13:13.406905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.605 [2024-11-29 13:13:13.406969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.605 [2024-11-29 13:13:13.406983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.605 [2024-11-29 13:13:13.406990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.605 [2024-11-29 13:13:13.406996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.605 [2024-11-29 13:13:13.407010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.605 qpair failed and we were unable to recover it.
00:29:13.605 [2024-11-29 13:13:13.416940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.605 [2024-11-29 13:13:13.417017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.605 [2024-11-29 13:13:13.417031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.605 [2024-11-29 13:13:13.417037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.605 [2024-11-29 13:13:13.417044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.605 [2024-11-29 13:13:13.417059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.605 qpair failed and we were unable to recover it.
00:29:13.865 [2024-11-29 13:13:13.426967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.865 [2024-11-29 13:13:13.427026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.865 [2024-11-29 13:13:13.427040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.865 [2024-11-29 13:13:13.427047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.865 [2024-11-29 13:13:13.427053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.866 [2024-11-29 13:13:13.427069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.866 qpair failed and we were unable to recover it.
00:29:13.866 [2024-11-29 13:13:13.436994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.866 [2024-11-29 13:13:13.437057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.866 [2024-11-29 13:13:13.437070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.866 [2024-11-29 13:13:13.437077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.866 [2024-11-29 13:13:13.437083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.866 [2024-11-29 13:13:13.437102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.866 qpair failed and we were unable to recover it.
00:29:13.866 [2024-11-29 13:13:13.447102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.866 [2024-11-29 13:13:13.447180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.866 [2024-11-29 13:13:13.447194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.866 [2024-11-29 13:13:13.447201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.866 [2024-11-29 13:13:13.447207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.866 [2024-11-29 13:13:13.447222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.866 qpair failed and we were unable to recover it.
00:29:13.866 [2024-11-29 13:13:13.457052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.866 [2024-11-29 13:13:13.457115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.866 [2024-11-29 13:13:13.457129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.866 [2024-11-29 13:13:13.457135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.866 [2024-11-29 13:13:13.457141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.866 [2024-11-29 13:13:13.457157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.866 qpair failed and we were unable to recover it.
00:29:13.866 [2024-11-29 13:13:13.467129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.866 [2024-11-29 13:13:13.467225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.866 [2024-11-29 13:13:13.467240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.866 [2024-11-29 13:13:13.467246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.866 [2024-11-29 13:13:13.467252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90
00:29:13.866 [2024-11-29 13:13:13.467267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:13.866 qpair failed and we were unable to recover it.
00:29:13.866 [2024-11-29 13:13:13.477117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.866 [2024-11-29 13:13:13.477198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.866 [2024-11-29 13:13:13.477213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.866 [2024-11-29 13:13:13.477220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.866 [2024-11-29 13:13:13.477226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.866 [2024-11-29 13:13:13.477242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.866 qpair failed and we were unable to recover it. 
00:29:13.866 [2024-11-29 13:13:13.487114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.866 [2024-11-29 13:13:13.487207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.866 [2024-11-29 13:13:13.487221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.866 [2024-11-29 13:13:13.487228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.866 [2024-11-29 13:13:13.487235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.866 [2024-11-29 13:13:13.487251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.866 qpair failed and we were unable to recover it. 
00:29:13.866 [2024-11-29 13:13:13.497149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.866 [2024-11-29 13:13:13.497202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.866 [2024-11-29 13:13:13.497216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.866 [2024-11-29 13:13:13.497223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.866 [2024-11-29 13:13:13.497229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.866 [2024-11-29 13:13:13.497243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.866 qpair failed and we were unable to recover it. 
00:29:13.866 [2024-11-29 13:13:13.507223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.866 [2024-11-29 13:13:13.507279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.866 [2024-11-29 13:13:13.507293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.866 [2024-11-29 13:13:13.507299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.866 [2024-11-29 13:13:13.507305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.866 [2024-11-29 13:13:13.507320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.866 qpair failed and we were unable to recover it. 
00:29:13.866 [2024-11-29 13:13:13.517208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.866 [2024-11-29 13:13:13.517261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.866 [2024-11-29 13:13:13.517275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.866 [2024-11-29 13:13:13.517282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.866 [2024-11-29 13:13:13.517288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.866 [2024-11-29 13:13:13.517303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.866 qpair failed and we were unable to recover it. 
00:29:13.866 [2024-11-29 13:13:13.527266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.866 [2024-11-29 13:13:13.527359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.866 [2024-11-29 13:13:13.527377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.866 [2024-11-29 13:13:13.527384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.866 [2024-11-29 13:13:13.527390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.866 [2024-11-29 13:13:13.527405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.866 qpair failed and we were unable to recover it. 
00:29:13.866 [2024-11-29 13:13:13.537277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.866 [2024-11-29 13:13:13.537335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.866 [2024-11-29 13:13:13.537349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.866 [2024-11-29 13:13:13.537356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.866 [2024-11-29 13:13:13.537362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.866 [2024-11-29 13:13:13.537377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.866 qpair failed and we were unable to recover it. 
00:29:13.866 [2024-11-29 13:13:13.547313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.866 [2024-11-29 13:13:13.547379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.866 [2024-11-29 13:13:13.547393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.866 [2024-11-29 13:13:13.547400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.866 [2024-11-29 13:13:13.547406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.866 [2024-11-29 13:13:13.547422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.866 qpair failed and we were unable to recover it. 
00:29:13.866 [2024-11-29 13:13:13.557392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.866 [2024-11-29 13:13:13.557446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.866 [2024-11-29 13:13:13.557462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.557469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.557475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.557491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.567367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.567424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.567438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.567445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.567454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.567469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.577442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.577500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.577515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.577522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.577528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.577544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.587341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.587396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.587410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.587417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.587423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.587438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.597478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.597536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.597550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.597557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.597563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.597578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.607490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.607563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.607579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.607586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.607592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.607608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.617562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.617621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.617635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.617641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.617647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.617662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.627542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.627598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.627612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.627619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.627625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.627640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.637566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.637624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.637638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.637645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.637651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.637666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.647603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.647663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.647676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.647683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.647689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.647704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.657623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.657694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.657712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.657719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.657725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.657741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.667634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.667687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.667701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.667708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.667714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.667730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:13.867 [2024-11-29 13:13:13.677669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.867 [2024-11-29 13:13:13.677727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.867 [2024-11-29 13:13:13.677741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.867 [2024-11-29 13:13:13.677748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.867 [2024-11-29 13:13:13.677754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:13.867 [2024-11-29 13:13:13.677770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.867 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-11-29 13:13:13.687633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.127 [2024-11-29 13:13:13.687693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.127 [2024-11-29 13:13:13.687707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.127 [2024-11-29 13:13:13.687713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.127 [2024-11-29 13:13:13.687719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.127 [2024-11-29 13:13:13.687734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.127 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-11-29 13:13:13.697763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.127 [2024-11-29 13:13:13.697825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.127 [2024-11-29 13:13:13.697839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.127 [2024-11-29 13:13:13.697849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.127 [2024-11-29 13:13:13.697855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.127 [2024-11-29 13:13:13.697871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.127 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-11-29 13:13:13.707764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.127 [2024-11-29 13:13:13.707821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.127 [2024-11-29 13:13:13.707835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.127 [2024-11-29 13:13:13.707841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.127 [2024-11-29 13:13:13.707847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.127 [2024-11-29 13:13:13.707862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.127 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-11-29 13:13:13.717732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.127 [2024-11-29 13:13:13.717836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.127 [2024-11-29 13:13:13.717851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.127 [2024-11-29 13:13:13.717858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.127 [2024-11-29 13:13:13.717864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.127 [2024-11-29 13:13:13.717879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.127 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-11-29 13:13:13.727832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.127 [2024-11-29 13:13:13.727899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.127 [2024-11-29 13:13:13.727912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.127 [2024-11-29 13:13:13.727919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.127 [2024-11-29 13:13:13.727925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.127 [2024-11-29 13:13:13.727941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.127 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-11-29 13:13:13.737881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.127 [2024-11-29 13:13:13.737944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.127 [2024-11-29 13:13:13.737961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.127 [2024-11-29 13:13:13.737968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.127 [2024-11-29 13:13:13.737974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.127 [2024-11-29 13:13:13.737989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.127 qpair failed and we were unable to recover it. 
00:29:14.127 [2024-11-29 13:13:13.747918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.127 [2024-11-29 13:13:13.747978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.127 [2024-11-29 13:13:13.747993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.127 [2024-11-29 13:13:13.748000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.127 [2024-11-29 13:13:13.748006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.127 [2024-11-29 13:13:13.748020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.127 qpair failed and we were unable to recover it. 
[The identical CONNECT failure sequence repeats 34 more times at ~10 ms intervals, from 2024-11-29 13:13:13.757 through 13:13:14.088 — same error chain each time (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7f838c000b90; CQ transport error -6 on qpair id 1), each ending in "qpair failed and we were unable to recover it."]
00:29:14.397 [2024-11-29 13:13:14.098866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.397 [2024-11-29 13:13:14.098932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.397 [2024-11-29 13:13:14.098946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.397 [2024-11-29 13:13:14.098957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.397 [2024-11-29 13:13:14.098963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.397 [2024-11-29 13:13:14.098978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.397 qpair failed and we were unable to recover it. 
00:29:14.397 [2024-11-29 13:13:14.108909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.108972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.108987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.108994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.109000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.109015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.118918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.119010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.119025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.119031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.119038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.119055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.129020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.129081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.129095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.129101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.129110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.129125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.138994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.139056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.139070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.139076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.139082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.139097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.149010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.149068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.149081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.149088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.149095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.149110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.159045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.159101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.159116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.159123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.159129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.159144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.169025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.169082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.169097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.169104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.169110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.169125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.179168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.179225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.179238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.179245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.179251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.179265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.189138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.189194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.189208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.189215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.189220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.189235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.199152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.199209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.199224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.199230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.199236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.199251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.398 [2024-11-29 13:13:14.209216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.398 [2024-11-29 13:13:14.209298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.398 [2024-11-29 13:13:14.209312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.398 [2024-11-29 13:13:14.209319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.398 [2024-11-29 13:13:14.209325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.398 [2024-11-29 13:13:14.209340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.398 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.219156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.219217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.219234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.219240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.219246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.219261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.229267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.229350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.229365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.229372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.229378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.229393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.239249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.239315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.239328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.239335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.239342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.239357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.249310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.249367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.249380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.249387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.249393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.249408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.259295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.259356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.259370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.259380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.259386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.259402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.269331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.269383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.269397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.269403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.269409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.269424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.279381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.279450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.279464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.279470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.279477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.279496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.289426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.289483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.289497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.289504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.289509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.289524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.299453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.299511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.299525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.299531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.299537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.299553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.309508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.658 [2024-11-29 13:13:14.309572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.658 [2024-11-29 13:13:14.309586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.658 [2024-11-29 13:13:14.309593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.658 [2024-11-29 13:13:14.309599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.658 [2024-11-29 13:13:14.309615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.658 qpair failed and we were unable to recover it. 
00:29:14.658 [2024-11-29 13:13:14.319511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.319566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.319580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.319587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.319593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.319608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.329551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.329612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.329626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.329633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.329639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.329654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.339501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.339563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.339576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.339583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.339589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.339604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.349599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.349658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.349672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.349678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.349684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.349699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.359620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.359675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.359689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.359695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.359701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.359716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.369613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.369674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.369688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.369695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.369701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.369716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.379679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.379733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.379747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.379754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.379760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.379775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.389735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.389796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.389810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.389820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.389826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.389841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.399666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.399726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.399740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.399746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.399752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.399767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.409779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.409838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.409852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.409859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.409866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.409881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.419744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.419804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.419818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.419825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.419831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.419848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.429821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.429876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.429890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.429897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.429903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.429924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.439796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.439851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.659 [2024-11-29 13:13:14.439865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.659 [2024-11-29 13:13:14.439872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.659 [2024-11-29 13:13:14.439878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.659 [2024-11-29 13:13:14.439893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.659 qpair failed and we were unable to recover it. 
00:29:14.659 [2024-11-29 13:13:14.449825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.659 [2024-11-29 13:13:14.449886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.660 [2024-11-29 13:13:14.449900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.660 [2024-11-29 13:13:14.449907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.660 [2024-11-29 13:13:14.449913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.660 [2024-11-29 13:13:14.449927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.660 qpair failed and we were unable to recover it. 
00:29:14.660 [2024-11-29 13:13:14.459907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.660 [2024-11-29 13:13:14.459967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.660 [2024-11-29 13:13:14.459981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.660 [2024-11-29 13:13:14.459988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.660 [2024-11-29 13:13:14.459994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.660 [2024-11-29 13:13:14.460010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.660 qpair failed and we were unable to recover it. 
00:29:14.660 [2024-11-29 13:13:14.469884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.660 [2024-11-29 13:13:14.469944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.660 [2024-11-29 13:13:14.469962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.660 [2024-11-29 13:13:14.469968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.660 [2024-11-29 13:13:14.469974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.660 [2024-11-29 13:13:14.469990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.660 qpair failed and we were unable to recover it. 
00:29:14.920 [2024-11-29 13:13:14.479915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.920 [2024-11-29 13:13:14.479981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.920 [2024-11-29 13:13:14.479996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.920 [2024-11-29 13:13:14.480003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.920 [2024-11-29 13:13:14.480009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.920 [2024-11-29 13:13:14.480024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.920 qpair failed and we were unable to recover it. 
00:29:14.920 [2024-11-29 13:13:14.490029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.920 [2024-11-29 13:13:14.490099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.920 [2024-11-29 13:13:14.490113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.920 [2024-11-29 13:13:14.490120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.920 [2024-11-29 13:13:14.490126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.920 [2024-11-29 13:13:14.490141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.920 qpair failed and we were unable to recover it. 
00:29:14.920 [2024-11-29 13:13:14.500057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.920 [2024-11-29 13:13:14.500138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.920 [2024-11-29 13:13:14.500153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.920 [2024-11-29 13:13:14.500160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.920 [2024-11-29 13:13:14.500166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.920 [2024-11-29 13:13:14.500182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.920 qpair failed and we were unable to recover it. 
00:29:14.920 [2024-11-29 13:13:14.510067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.920 [2024-11-29 13:13:14.510128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.920 [2024-11-29 13:13:14.510142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.920 [2024-11-29 13:13:14.510149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.920 [2024-11-29 13:13:14.510155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.920 [2024-11-29 13:13:14.510170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.920 qpair failed and we were unable to recover it. 
00:29:14.920 [2024-11-29 13:13:14.520117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.920 [2024-11-29 13:13:14.520188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.920 [2024-11-29 13:13:14.520206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.920 [2024-11-29 13:13:14.520213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.920 [2024-11-29 13:13:14.520219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.920 [2024-11-29 13:13:14.520233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.920 qpair failed and we were unable to recover it. 
00:29:14.920 [2024-11-29 13:13:14.530129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.920 [2024-11-29 13:13:14.530187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.920 [2024-11-29 13:13:14.530201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.920 [2024-11-29 13:13:14.530207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.920 [2024-11-29 13:13:14.530213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.530228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.540161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.540220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.540233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.540240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.540246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.540260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.550183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.550235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.550249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.550255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.550261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.550276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.560228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.560283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.560297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.560304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.560315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.560331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.570255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.570312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.570325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.570332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.570338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.570353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.580266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.580342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.580357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.580364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.580370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.580385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.590290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.590374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.590389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.590395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.590402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.590417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.600311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.600363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.600378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.600385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.600391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.600406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.610395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.610474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.610489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.610496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.610502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.610518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.620441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.620497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.620511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.620518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.620523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.620538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.630405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.630463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.630478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.630485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.630491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.630506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.640432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.640490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.640504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.640511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.640517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.640532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.650468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.650529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.650546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.650553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.650559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.650574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.921 qpair failed and we were unable to recover it. 
00:29:14.921 [2024-11-29 13:13:14.660509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.921 [2024-11-29 13:13:14.660580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.921 [2024-11-29 13:13:14.660594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.921 [2024-11-29 13:13:14.660601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.921 [2024-11-29 13:13:14.660607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.921 [2024-11-29 13:13:14.660622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.922 qpair failed and we were unable to recover it. 
00:29:14.922 [2024-11-29 13:13:14.670535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.922 [2024-11-29 13:13:14.670605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.922 [2024-11-29 13:13:14.670618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.922 [2024-11-29 13:13:14.670626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.922 [2024-11-29 13:13:14.670632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.922 [2024-11-29 13:13:14.670646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.922 qpair failed and we were unable to recover it. 
00:29:14.922 [2024-11-29 13:13:14.680557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.922 [2024-11-29 13:13:14.680613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.922 [2024-11-29 13:13:14.680626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.922 [2024-11-29 13:13:14.680632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.922 [2024-11-29 13:13:14.680638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.922 [2024-11-29 13:13:14.680653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.922 qpair failed and we were unable to recover it. 
00:29:14.922 [2024-11-29 13:13:14.690555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.922 [2024-11-29 13:13:14.690620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.922 [2024-11-29 13:13:14.690633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.922 [2024-11-29 13:13:14.690640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.922 [2024-11-29 13:13:14.690649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.922 [2024-11-29 13:13:14.690664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.922 qpair failed and we were unable to recover it. 
00:29:14.922 [2024-11-29 13:13:14.700600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.922 [2024-11-29 13:13:14.700656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.922 [2024-11-29 13:13:14.700671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.922 [2024-11-29 13:13:14.700679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.922 [2024-11-29 13:13:14.700686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.922 [2024-11-29 13:13:14.700701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.922 qpair failed and we were unable to recover it. 
00:29:14.922 [2024-11-29 13:13:14.710670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.922 [2024-11-29 13:13:14.710770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.922 [2024-11-29 13:13:14.710785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.922 [2024-11-29 13:13:14.710793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.922 [2024-11-29 13:13:14.710800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.922 [2024-11-29 13:13:14.710816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.922 qpair failed and we were unable to recover it. 
00:29:14.922 [2024-11-29 13:13:14.720665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.922 [2024-11-29 13:13:14.720720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.922 [2024-11-29 13:13:14.720735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.922 [2024-11-29 13:13:14.720741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.922 [2024-11-29 13:13:14.720748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.922 [2024-11-29 13:13:14.720763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.922 qpair failed and we were unable to recover it. 
00:29:14.922 [2024-11-29 13:13:14.730723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.922 [2024-11-29 13:13:14.730780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.922 [2024-11-29 13:13:14.730794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.922 [2024-11-29 13:13:14.730801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.922 [2024-11-29 13:13:14.730807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:14.922 [2024-11-29 13:13:14.730822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.922 qpair failed and we were unable to recover it. 
00:29:15.183 [2024-11-29 13:13:14.740727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.183 [2024-11-29 13:13:14.740785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.183 [2024-11-29 13:13:14.740800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.183 [2024-11-29 13:13:14.740807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.183 [2024-11-29 13:13:14.740812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.183 [2024-11-29 13:13:14.740828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.183 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.750670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.750731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.750745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.750752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.750758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.184 [2024-11-29 13:13:14.750774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.184 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.760737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.760793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.760807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.760814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.760820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.184 [2024-11-29 13:13:14.760834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.184 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.770805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.770890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.770905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.770912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.770919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.184 [2024-11-29 13:13:14.770934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.184 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.780837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.780894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.780911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.780918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.780924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.184 [2024-11-29 13:13:14.780940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.184 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.790867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.790920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.790934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.790941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.790950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.184 [2024-11-29 13:13:14.790966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.184 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.800896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.800955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.800970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.800976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.800982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.184 [2024-11-29 13:13:14.800998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.184 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.810940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.811026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.811041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.811048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.811054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.184 [2024-11-29 13:13:14.811070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.184 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.820955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.821012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.821027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.821036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.821042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.184 [2024-11-29 13:13:14.821057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.184 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.830975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.831035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.831048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.831055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.831061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.184 [2024-11-29 13:13:14.831076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.184 qpair failed and we were unable to recover it. 
00:29:15.184 [2024-11-29 13:13:14.841009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.184 [2024-11-29 13:13:14.841066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.184 [2024-11-29 13:13:14.841080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.184 [2024-11-29 13:13:14.841087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.184 [2024-11-29 13:13:14.841093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.841108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.851037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.851096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.851110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.185 [2024-11-29 13:13:14.851117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.185 [2024-11-29 13:13:14.851123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.851138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.861058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.861115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.861129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.185 [2024-11-29 13:13:14.861135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.185 [2024-11-29 13:13:14.861141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.861156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.871092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.871156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.871170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.185 [2024-11-29 13:13:14.871177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.185 [2024-11-29 13:13:14.871183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.871198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.881107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.881181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.881198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.185 [2024-11-29 13:13:14.881205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.185 [2024-11-29 13:13:14.881211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.881226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.891151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.891207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.891221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.185 [2024-11-29 13:13:14.891228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.185 [2024-11-29 13:13:14.891234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.891250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.901230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.901289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.901303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.185 [2024-11-29 13:13:14.901311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.185 [2024-11-29 13:13:14.901317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.901334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.911250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.911310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.911325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.185 [2024-11-29 13:13:14.911332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.185 [2024-11-29 13:13:14.911338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.911353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.921237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.921294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.921308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.185 [2024-11-29 13:13:14.921315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.185 [2024-11-29 13:13:14.921321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.921336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.931212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.931270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.931284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.185 [2024-11-29 13:13:14.931291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.185 [2024-11-29 13:13:14.931296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.185 [2024-11-29 13:13:14.931311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.185 qpair failed and we were unable to recover it. 
00:29:15.185 [2024-11-29 13:13:14.941254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.185 [2024-11-29 13:13:14.941321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.185 [2024-11-29 13:13:14.941335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.186 [2024-11-29 13:13:14.941342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.186 [2024-11-29 13:13:14.941348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.186 [2024-11-29 13:13:14.941364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.186 qpair failed and we were unable to recover it. 
00:29:15.186 [2024-11-29 13:13:14.951346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.186 [2024-11-29 13:13:14.951400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.186 [2024-11-29 13:13:14.951414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.186 [2024-11-29 13:13:14.951423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.186 [2024-11-29 13:13:14.951430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.186 [2024-11-29 13:13:14.951446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.186 qpair failed and we were unable to recover it. 
00:29:15.186 [2024-11-29 13:13:14.961355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.186 [2024-11-29 13:13:14.961410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.186 [2024-11-29 13:13:14.961424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.186 [2024-11-29 13:13:14.961431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.186 [2024-11-29 13:13:14.961436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.186 [2024-11-29 13:13:14.961451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.186 qpair failed and we were unable to recover it. 
00:29:15.186 [2024-11-29 13:13:14.971398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.186 [2024-11-29 13:13:14.971468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.186 [2024-11-29 13:13:14.971482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.186 [2024-11-29 13:13:14.971489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.186 [2024-11-29 13:13:14.971495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.186 [2024-11-29 13:13:14.971510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.186 qpair failed and we were unable to recover it. 
00:29:15.186 [2024-11-29 13:13:14.981423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.186 [2024-11-29 13:13:14.981483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.186 [2024-11-29 13:13:14.981497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.186 [2024-11-29 13:13:14.981504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.186 [2024-11-29 13:13:14.981509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.186 [2024-11-29 13:13:14.981525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.186 qpair failed and we were unable to recover it. 
00:29:15.186 [2024-11-29 13:13:14.991448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.186 [2024-11-29 13:13:14.991505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.186 [2024-11-29 13:13:14.991518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.186 [2024-11-29 13:13:14.991525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.186 [2024-11-29 13:13:14.991531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.186 [2024-11-29 13:13:14.991549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.186 qpair failed and we were unable to recover it. 
00:29:15.447 [2024-11-29 13:13:15.001479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.447 [2024-11-29 13:13:15.001536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.447 [2024-11-29 13:13:15.001550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.447 [2024-11-29 13:13:15.001557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.447 [2024-11-29 13:13:15.001563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.447 [2024-11-29 13:13:15.001577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.447 qpair failed and we were unable to recover it. 
00:29:15.447 [2024-11-29 13:13:15.011507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.447 [2024-11-29 13:13:15.011563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.447 [2024-11-29 13:13:15.011576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.447 [2024-11-29 13:13:15.011583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.447 [2024-11-29 13:13:15.011589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.447 [2024-11-29 13:13:15.011604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.447 qpair failed and we were unable to recover it. 
00:29:15.447 [2024-11-29 13:13:15.021591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.447 [2024-11-29 13:13:15.021651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.447 [2024-11-29 13:13:15.021664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.447 [2024-11-29 13:13:15.021670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.447 [2024-11-29 13:13:15.021676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.447 [2024-11-29 13:13:15.021691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.447 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.031596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.031668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.031682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.031689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.031695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.031711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.041609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.041668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.041682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.041689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.041695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.041710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.051634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.051691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.051706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.051713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.051718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.051733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.061687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.061750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.061765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.061771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.061777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.061792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.071723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.071777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.071791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.071797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.071804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.071819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.081710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.081766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.081783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.081790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.081796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.081811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.091791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.091847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.091860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.091867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.091873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.091888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.101768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.101826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.101840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.101847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.101853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.101868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.111821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.111883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.111896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.111903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.111909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.111924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.121814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.448 [2024-11-29 13:13:15.121914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.448 [2024-11-29 13:13:15.121929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.448 [2024-11-29 13:13:15.121936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.448 [2024-11-29 13:13:15.121945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.448 [2024-11-29 13:13:15.121965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.448 qpair failed and we were unable to recover it. 
00:29:15.448 [2024-11-29 13:13:15.131862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.131918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.131932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.131939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.131945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.131966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.141884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.141941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.141958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.141964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.141970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.141985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.151957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.152015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.152029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.152035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.152041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.152057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.161938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.161993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.162007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.162014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.162019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.162034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.171986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.172041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.172055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.172062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.172068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.172083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.182054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.182116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.182129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.182136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.182142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.182156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.192037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.192093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.192106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.192113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.192119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.192134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.202029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.202088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.202102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.202108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.202114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.202129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.212081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.212139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.212156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.212163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.212169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.212183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.222147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.449 [2024-11-29 13:13:15.222219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.449 [2024-11-29 13:13:15.222237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.449 [2024-11-29 13:13:15.222244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.449 [2024-11-29 13:13:15.222250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.449 [2024-11-29 13:13:15.222265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.449 qpair failed and we were unable to recover it. 
00:29:15.449 [2024-11-29 13:13:15.232180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.450 [2024-11-29 13:13:15.232235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.450 [2024-11-29 13:13:15.232248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.450 [2024-11-29 13:13:15.232255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.450 [2024-11-29 13:13:15.232261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.450 [2024-11-29 13:13:15.232276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.450 qpair failed and we were unable to recover it. 
00:29:15.450 [2024-11-29 13:13:15.242207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.450 [2024-11-29 13:13:15.242272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.450 [2024-11-29 13:13:15.242286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.450 [2024-11-29 13:13:15.242293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.450 [2024-11-29 13:13:15.242299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.450 [2024-11-29 13:13:15.242314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.450 qpair failed and we were unable to recover it. 
00:29:15.450 [2024-11-29 13:13:15.252199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.450 [2024-11-29 13:13:15.252261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.450 [2024-11-29 13:13:15.252275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.450 [2024-11-29 13:13:15.252281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.450 [2024-11-29 13:13:15.252293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.450 [2024-11-29 13:13:15.252308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.450 qpair failed and we were unable to recover it. 
00:29:15.450 [2024-11-29 13:13:15.262219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.450 [2024-11-29 13:13:15.262279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.450 [2024-11-29 13:13:15.262292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.450 [2024-11-29 13:13:15.262299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.450 [2024-11-29 13:13:15.262305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.450 [2024-11-29 13:13:15.262320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.450 qpair failed and we were unable to recover it. 
00:29:15.711 [2024-11-29 13:13:15.272248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.711 [2024-11-29 13:13:15.272304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.711 [2024-11-29 13:13:15.272318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.711 [2024-11-29 13:13:15.272324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.711 [2024-11-29 13:13:15.272330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.711 [2024-11-29 13:13:15.272345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.711 qpair failed and we were unable to recover it. 
00:29:15.711 [2024-11-29 13:13:15.282283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.711 [2024-11-29 13:13:15.282338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.711 [2024-11-29 13:13:15.282351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.711 [2024-11-29 13:13:15.282358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.711 [2024-11-29 13:13:15.282364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.711 [2024-11-29 13:13:15.282379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.711 qpair failed and we were unable to recover it. 
00:29:15.711 [2024-11-29 13:13:15.292318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.711 [2024-11-29 13:13:15.292376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.711 [2024-11-29 13:13:15.292390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.711 [2024-11-29 13:13:15.292397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.711 [2024-11-29 13:13:15.292403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.711 [2024-11-29 13:13:15.292418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.711 qpair failed and we were unable to recover it. 
00:29:15.711 [2024-11-29 13:13:15.302346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.711 [2024-11-29 13:13:15.302407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.711 [2024-11-29 13:13:15.302421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.711 [2024-11-29 13:13:15.302427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.711 [2024-11-29 13:13:15.302433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.711 [2024-11-29 13:13:15.302448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.711 qpair failed and we were unable to recover it. 
00:29:15.711 [... the same six-record CONNECT retry cycle (Unknown controller ID 0x1 -> Connect command failed rc -5 -> sct 1, sc 130 -> Failed to poll NVMe-oF Fabric CONNECT command -> Failed to connect tqpair=0x7f838c000b90 -> CQ transport error -6 on qpair id 1) repeats 34 more times at ~10 ms intervals, target timestamps 13:13:15.312 through 13:13:15.643, each ending: qpair failed and we were unable to recover it. ...]
00:29:15.975 [2024-11-29 13:13:15.653390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.975 [2024-11-29 13:13:15.653501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.653516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.653522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.653529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.653544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.663378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.663434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.663448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.663455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.663461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.663476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.673342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.673400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.673414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.673420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.673426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.673441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.683368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.683453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.683468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.683475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.683485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.683500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.693496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.693561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.693574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.693581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.693587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.693602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.703423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.703483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.703497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.703504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.703510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.703525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.713487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.713542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.713556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.713563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.713568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.713583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.723531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.723590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.723604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.723610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.723616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.723631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.733527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.733584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.733597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.733604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.733610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.733625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.743619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.743679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.743692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.743698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.743704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.743719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.753687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.753754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.976 [2024-11-29 13:13:15.753768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.976 [2024-11-29 13:13:15.753775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.976 [2024-11-29 13:13:15.753781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.976 [2024-11-29 13:13:15.753795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.976 qpair failed and we were unable to recover it. 
00:29:15.976 [2024-11-29 13:13:15.763701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.976 [2024-11-29 13:13:15.763754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-11-29 13:13:15.763768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-11-29 13:13:15.763775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-11-29 13:13:15.763781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.977 [2024-11-29 13:13:15.763796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.977 qpair failed and we were unable to recover it. 
00:29:15.977 [2024-11-29 13:13:15.773717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-11-29 13:13:15.773787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-11-29 13:13:15.773828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-11-29 13:13:15.773834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-11-29 13:13:15.773840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.977 [2024-11-29 13:13:15.773856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.977 qpair failed and we were unable to recover it. 
00:29:15.977 [2024-11-29 13:13:15.783738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.977 [2024-11-29 13:13:15.783793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.977 [2024-11-29 13:13:15.783806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.977 [2024-11-29 13:13:15.783813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.977 [2024-11-29 13:13:15.783819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:15.977 [2024-11-29 13:13:15.783833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.977 qpair failed and we were unable to recover it. 
00:29:16.237 [2024-11-29 13:13:15.793748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.237 [2024-11-29 13:13:15.793806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.237 [2024-11-29 13:13:15.793820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.237 [2024-11-29 13:13:15.793826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.237 [2024-11-29 13:13:15.793832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.237 [2024-11-29 13:13:15.793847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.237 qpair failed and we were unable to recover it. 
00:29:16.237 [2024-11-29 13:13:15.803785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.237 [2024-11-29 13:13:15.803840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.237 [2024-11-29 13:13:15.803853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.237 [2024-11-29 13:13:15.803860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.237 [2024-11-29 13:13:15.803866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.237 [2024-11-29 13:13:15.803881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.237 qpair failed and we were unable to recover it. 
00:29:16.237 [2024-11-29 13:13:15.813819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.237 [2024-11-29 13:13:15.813879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.237 [2024-11-29 13:13:15.813892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.237 [2024-11-29 13:13:15.813898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.237 [2024-11-29 13:13:15.813908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.237 [2024-11-29 13:13:15.813922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.237 qpair failed and we were unable to recover it. 
00:29:16.237 [2024-11-29 13:13:15.823856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.823917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.823931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.823938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.823944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.823963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.833860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.833913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.833927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.833933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.833939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.833959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.843822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.843878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.843891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.843898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.843904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.843918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.853853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.853912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.853926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.853932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.853938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.853957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.863884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.863946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.863964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.863970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.863976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.863991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.873975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.874028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.874042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.874049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.874054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.874069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.883988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.884043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.884057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.884064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.884069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.884085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.894096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.894156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.894171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.894177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.894184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.894199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.904017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.904071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.904088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.904095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.904101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.904116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.914023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.238 [2024-11-29 13:13:15.914083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.238 [2024-11-29 13:13:15.914097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.238 [2024-11-29 13:13:15.914104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.238 [2024-11-29 13:13:15.914110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.238 [2024-11-29 13:13:15.914125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.238 qpair failed and we were unable to recover it. 
00:29:16.238 [2024-11-29 13:13:15.924136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.239 [2024-11-29 13:13:15.924194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.239 [2024-11-29 13:13:15.924208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.239 [2024-11-29 13:13:15.924214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.239 [2024-11-29 13:13:15.924220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.239 [2024-11-29 13:13:15.924235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.239 qpair failed and we were unable to recover it. 
00:29:16.239 [2024-11-29 13:13:15.934153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.239 [2024-11-29 13:13:15.934209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.239 [2024-11-29 13:13:15.934223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.239 [2024-11-29 13:13:15.934230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.239 [2024-11-29 13:13:15.934235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f838c000b90 00:29:16.239 [2024-11-29 13:13:15.934250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.239 qpair failed and we were unable to recover it. 00:29:16.239 [2024-11-29 13:13:15.934419] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:16.239 A controller has encountered a failure and is being reset. 
00:29:16.239 [2024-11-29 13:13:15.944195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.239 [2024-11-29 13:13:15.944279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.239 [2024-11-29 13:13:15.944310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.239 [2024-11-29 13:13:15.944323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.239 [2024-11-29 13:13:15.944333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8380000b90 00:29:16.239 [2024-11-29 13:13:15.944357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:16.239 qpair failed and we were unable to recover it. 
00:29:16.239 [2024-11-29 13:13:15.954232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.239 [2024-11-29 13:13:15.954298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.239 [2024-11-29 13:13:15.954313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.239 [2024-11-29 13:13:15.954321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.239 [2024-11-29 13:13:15.954327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8380000b90 00:29:16.239 [2024-11-29 13:13:15.954342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:16.239 qpair failed and we were unable to recover it. 
00:29:16.239 [2024-11-29 13:13:15.964298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.239 [2024-11-29 13:13:15.964354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.239 [2024-11-29 13:13:15.964368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.239 [2024-11-29 13:13:15.964375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.239 [2024-11-29 13:13:15.964381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8380000b90 00:29:16.239 [2024-11-29 13:13:15.964397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:16.239 qpair failed and we were unable to recover it. 
00:29:16.239 [2024-11-29 13:13:15.974282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.239 [2024-11-29 13:13:15.974340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.239 [2024-11-29 13:13:15.974354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.239 [2024-11-29 13:13:15.974362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.239 [2024-11-29 13:13:15.974368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8380000b90 00:29:16.239 [2024-11-29 13:13:15.974383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:16.239 qpair failed and we were unable to recover it. 
00:29:16.239 [2024-11-29 13:13:15.984333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.239 [2024-11-29 13:13:15.984395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.239 [2024-11-29 13:13:15.984416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.239 [2024-11-29 13:13:15.984425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.239 [2024-11-29 13:13:15.984435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8384000b90 00:29:16.239 [2024-11-29 13:13:15.984454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.239 qpair failed and we were unable to recover it. 
00:29:16.239 [2024-11-29 13:13:15.994382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.239 [2024-11-29 13:13:15.994458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.239 [2024-11-29 13:13:15.994473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.239 [2024-11-29 13:13:15.994480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.239 [2024-11-29 13:13:15.994487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8384000b90 00:29:16.239 [2024-11-29 13:13:15.994502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.239 qpair failed and we were unable to recover it. 00:29:16.239 Controller properly reset. 00:29:16.498 Initializing NVMe Controllers 00:29:16.498 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:16.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:16.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:16.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:16.498 Initialization complete. Launching workers. 
00:29:16.498 Starting thread on core 1 00:29:16.498 Starting thread on core 2 00:29:16.498 Starting thread on core 3 00:29:16.498 Starting thread on core 0 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:16.498 00:29:16.498 real 0m11.551s 00:29:16.498 user 0m21.410s 00:29:16.498 sys 0m4.672s 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.498 ************************************ 00:29:16.498 END TEST nvmf_target_disconnect_tc2 00:29:16.498 ************************************ 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:16.498 rmmod nvme_tcp 00:29:16.498 rmmod nvme_fabrics 00:29:16.498 rmmod nvme_keyring 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2153465 ']' 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2153465 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2153465 ']' 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2153465 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2153465 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2153465' 00:29:16.498 killing process with pid 2153465 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2153465 00:29:16.498 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2153465 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.758 13:13:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.664 13:13:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.664 00:29:18.664 real 0m19.932s 00:29:18.664 user 0m49.531s 00:29:18.664 sys 0m9.310s 00:29:18.664 13:13:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.664 13:13:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:18.664 ************************************ 00:29:18.664 END TEST nvmf_target_disconnect 00:29:18.664 ************************************ 00:29:18.923 13:13:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:18.923 00:29:18.923 real 5m42.776s 00:29:18.923 user 10m28.680s 00:29:18.923 sys 1m51.621s 00:29:18.923 13:13:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.923 13:13:18 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.923 ************************************ 00:29:18.923 END TEST nvmf_host 00:29:18.923 ************************************ 00:29:18.923 13:13:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:18.923 13:13:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:18.923 13:13:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:18.923 13:13:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:18.923 13:13:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.923 13:13:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.923 ************************************ 00:29:18.923 START TEST nvmf_target_core_interrupt_mode 00:29:18.923 ************************************ 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:18.923 * Looking for test storage... 
00:29:18.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:18.923 13:13:18 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:18.923 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.183 --rc 
genhtml_branch_coverage=1 00:29:19.183 --rc genhtml_function_coverage=1 00:29:19.183 --rc genhtml_legend=1 00:29:19.183 --rc geninfo_all_blocks=1 00:29:19.183 --rc geninfo_unexecuted_blocks=1 00:29:19.183 00:29:19.183 ' 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:19.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.183 --rc genhtml_branch_coverage=1 00:29:19.183 --rc genhtml_function_coverage=1 00:29:19.183 --rc genhtml_legend=1 00:29:19.183 --rc geninfo_all_blocks=1 00:29:19.183 --rc geninfo_unexecuted_blocks=1 00:29:19.183 00:29:19.183 ' 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.183 --rc genhtml_branch_coverage=1 00:29:19.183 --rc genhtml_function_coverage=1 00:29:19.183 --rc genhtml_legend=1 00:29:19.183 --rc geninfo_all_blocks=1 00:29:19.183 --rc geninfo_unexecuted_blocks=1 00:29:19.183 00:29:19.183 ' 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.183 --rc genhtml_branch_coverage=1 00:29:19.183 --rc genhtml_function_coverage=1 00:29:19.183 --rc genhtml_legend=1 00:29:19.183 --rc geninfo_all_blocks=1 00:29:19.183 --rc geninfo_unexecuted_blocks=1 00:29:19.183 00:29:19.183 ' 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.183 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.184 
13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.184 13:13:18 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:19.184 
13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:19.184 ************************************ 00:29:19.184 START TEST nvmf_abort 00:29:19.184 ************************************ 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:19.184 * Looking for test storage... 
00:29:19.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:19.184 13:13:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.184 --rc genhtml_branch_coverage=1 00:29:19.184 --rc genhtml_function_coverage=1 00:29:19.184 --rc genhtml_legend=1 00:29:19.184 --rc geninfo_all_blocks=1 00:29:19.184 --rc geninfo_unexecuted_blocks=1 00:29:19.184 00:29:19.184 ' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:19.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.184 --rc genhtml_branch_coverage=1 00:29:19.184 --rc genhtml_function_coverage=1 00:29:19.184 --rc genhtml_legend=1 00:29:19.184 --rc geninfo_all_blocks=1 00:29:19.184 --rc geninfo_unexecuted_blocks=1 00:29:19.184 00:29:19.184 ' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.184 --rc genhtml_branch_coverage=1 00:29:19.184 --rc genhtml_function_coverage=1 00:29:19.184 --rc genhtml_legend=1 00:29:19.184 --rc geninfo_all_blocks=1 00:29:19.184 --rc geninfo_unexecuted_blocks=1 00:29:19.184 00:29:19.184 ' 00:29:19.184 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.184 --rc genhtml_branch_coverage=1 00:29:19.184 --rc genhtml_function_coverage=1 00:29:19.184 --rc genhtml_legend=1 00:29:19.184 --rc geninfo_all_blocks=1 00:29:19.185 --rc geninfo_unexecuted_blocks=1 00:29:19.185 00:29:19.185 ' 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.185 13:13:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.185 13:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.185 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:19.444 13:13:19 
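Note that the PATH echoed by paths/export.sh above contains the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` entries many times over, because each sourcing prepends them again. A small dedupe helper (an illustration only — not part of SPDK) that keeps the first occurrence of each entry:

```shell
# Collapse duplicate PATH entries, preserving first-seen order.
dedupe_path() {
  local out= entry
  local IFS=:
  for entry in $1; do             # word-split input on colons
    case ":$out:" in
      *":$entry:"*) ;;            # already present, skip
      *) out=${out:+$out:}$entry ;;
    esac
  done
  printf '%s\n' "$out"
}

dedupe_path "/usr/bin:/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin"
# prints "/usr/bin:/opt/go/1.21.1/bin"
```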
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:19.444 13:13:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.712 13:13:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:24.712 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:24.712 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.712 
13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:24.712 Found net devices under 0000:86:00.0: cvl_0_0 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.712 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:24.712 Found net devices under 0000:86:00.1: cvl_0_1 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.713 13:13:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:29:24.713 00:29:24.713 --- 10.0.0.2 ping statistics --- 00:29:24.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.713 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:29:24.713 00:29:24.713 --- 10.0.0.1 ping statistics --- 00:29:24.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.713 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
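The nvmf_tcp_init sequence traced above can be condensed to the following command sketch (interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken from this log; they will differ on other hosts, and every command requires root and real NICs): one port of the dual-port NIC is moved into a private namespace to act as the target, while its peer stays in the root namespace as the initiator.

```shell
# System-configuration fragment, not runnable in isolation: mirrors the
# ip/iptables steps in the trace for a two-port loopback-cabled NIC.
ip netns add cvl_0_0_ns_spdk                 # private ns for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0      # target address (inside ns)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                           # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator check
```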
nvmfpid=2158213 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2158213 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2158213 ']' 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.713 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.713 [2024-11-29 13:13:24.425315] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:24.713 [2024-11-29 13:13:24.426242] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:29:24.713 [2024-11-29 13:13:24.426275] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.713 [2024-11-29 13:13:24.492298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:24.972 [2024-11-29 13:13:24.534460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.972 [2024-11-29 13:13:24.534491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.972 [2024-11-29 13:13:24.534499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.972 [2024-11-29 13:13:24.534505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.972 [2024-11-29 13:13:24.534510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.972 [2024-11-29 13:13:24.535849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.972 [2024-11-29 13:13:24.535869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.972 [2024-11-29 13:13:24.535871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.972 [2024-11-29 13:13:24.603293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:24.972 [2024-11-29 13:13:24.603317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:24.972 [2024-11-29 13:13:24.603552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:29:24.972 [2024-11-29 13:13:24.603614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.972 [2024-11-29 13:13:24.668604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:24.972 Malloc0 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.972 Delay0 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.972 [2024-11-29 13:13:24.752546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.972 13:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:25.230 [2024-11-29 13:13:24.868797] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:27.134 Initializing NVMe Controllers 00:29:27.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:27.134 controller IO queue size 128 less than required 00:29:27.134 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:27.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:27.135 Initialization complete. Launching workers. 
00:29:27.135 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36945 00:29:27.135 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37002, failed to submit 66 00:29:27.135 success 36945, unsuccessful 57, failed 0 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.135 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.135 rmmod nvme_tcp 00:29:27.135 rmmod nvme_fabrics 00:29:27.393 rmmod nvme_keyring 00:29:27.393 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.393 13:13:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:27.393 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:27.393 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2158213 ']' 00:29:27.393 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2158213 00:29:27.393 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2158213 ']' 00:29:27.393 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2158213 00:29:27.393 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:27.393 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.393 13:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2158213 00:29:27.393 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:27.393 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:27.393 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2158213' 00:29:27.393 killing process with pid 2158213 00:29:27.393 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2158213 00:29:27.393 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2158213 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:27.652 13:13:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.652 13:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.555 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.555 00:29:29.555 real 0m10.474s 00:29:29.555 user 0m9.932s 00:29:29.555 sys 0m5.226s 00:29:29.555 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.555 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.555 ************************************ 00:29:29.555 END TEST nvmf_abort 00:29:29.555 ************************************ 00:29:29.555 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:29.555 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:29.555 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.555 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:29.555 ************************************ 00:29:29.555 START TEST nvmf_ns_hotplug_stress 00:29:29.555 ************************************ 00:29:29.555 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:29.905 * Looking for test storage... 
00:29:29.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.905 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.906 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.906 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.906 --rc genhtml_branch_coverage=1 00:29:29.906 --rc genhtml_function_coverage=1 00:29:29.906 --rc genhtml_legend=1 00:29:29.906 --rc geninfo_all_blocks=1 00:29:29.906 --rc geninfo_unexecuted_blocks=1 00:29:29.906 00:29:29.906 ' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.906 --rc genhtml_branch_coverage=1 00:29:29.906 --rc genhtml_function_coverage=1 00:29:29.906 --rc genhtml_legend=1 00:29:29.906 --rc geninfo_all_blocks=1 00:29:29.906 --rc geninfo_unexecuted_blocks=1 00:29:29.906 00:29:29.906 ' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.906 --rc genhtml_branch_coverage=1 00:29:29.906 --rc genhtml_function_coverage=1 00:29:29.906 --rc genhtml_legend=1 00:29:29.906 --rc geninfo_all_blocks=1 00:29:29.906 --rc geninfo_unexecuted_blocks=1 00:29:29.906 00:29:29.906 ' 00:29:29.906 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.906 --rc genhtml_branch_coverage=1 00:29:29.906 --rc genhtml_function_coverage=1 00:29:29.906 --rc genhtml_legend=1 00:29:29.906 --rc geninfo_all_blocks=1 00:29:29.906 --rc geninfo_unexecuted_blocks=1 00:29:29.906 00:29:29.906 ' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.906 13:13:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.906 
13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.906 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.907 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.907 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.907 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.907 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.907 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:29.907 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.907 13:13:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.237 
13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.237 13:13:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.237 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:35.238 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.238 13:13:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:35.238 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.238 
13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:35.238 Found net devices under 0000:86:00.0: cvl_0_0 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:35.238 Found net devices under 0000:86:00.1: cvl_0_1 00:29:35.238 
13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:29:35.238 00:29:35.238 --- 10.0.0.2 ping statistics --- 00:29:35.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.238 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:35.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:29:35.238 00:29:35.238 --- 10.0.0.1 ping statistics --- 00:29:35.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.238 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.238 13:13:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2161989 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2161989 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2161989 ']' 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.238 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.239 13:13:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:35.239 [2024-11-29 13:13:34.991468] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:35.239 [2024-11-29 13:13:34.992389] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:29:35.239 [2024-11-29 13:13:34.992424] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.496 [2024-11-29 13:13:35.057066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:35.496 [2024-11-29 13:13:35.099176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.496 [2024-11-29 13:13:35.099214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.496 [2024-11-29 13:13:35.099220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.496 [2024-11-29 13:13:35.099226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.496 [2024-11-29 13:13:35.099231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:35.496 [2024-11-29 13:13:35.100646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.496 [2024-11-29 13:13:35.100733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.496 [2024-11-29 13:13:35.100733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.496 [2024-11-29 13:13:35.168255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:35.496 [2024-11-29 13:13:35.168268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:35.496 [2024-11-29 13:13:35.168470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:35.496 [2024-11-29 13:13:35.168544] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:35.496 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.496 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:35.496 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.496 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.496 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:35.496 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.496 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
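For readability, the `nvmf_tcp_init` steps traced above (nvmf/common.sh@250-291) amount to the following sequence. This is a sketch reconstructed from the xtrace, not the script itself: `run()` only echoes each command, so the sequence can be inspected without root privileges or the `cvl_0_*` NICs present.

```shell
# Dry-run sketch of the nvmf_tcp_init plumbing traced above.
# Interface names (cvl_0_0/cvl_0_1), the namespace name, and the 10.0.0.0/24
# addresses are taken from this log; run() only echoes, nothing touches the host.
run() { echo "+ $*"; }

nvmf_tcp_init_sketch() {
  local ns=cvl_0_0_ns_spdk
  run ip -4 addr flush cvl_0_0
  run ip -4 addr flush cvl_0_1
  run ip netns add "$ns"
  # Target-side port moves into the namespace; initiator side stays in the host.
  run ip link set cvl_0_0 netns "$ns"
  run ip addr add 10.0.0.1/24 dev cvl_0_1
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec "$ns" ip link set cvl_0_0 up
  run ip netns exec "$ns" ip link set lo up
  # Open the NVMe/TCP port toward the initiator-side interface.
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Connectivity check in both directions, as in the ping output above.
  run ping -c 1 10.0.0.2
  run ip netns exec "$ns" ping -c 1 10.0.0.1
}

nvmf_tcp_init_sketch
```

Moving one port of the dual-port E810 NIC into its own network namespace gives the target and initiator isolated stacks on the same host, which is why both pings above succeed with sub-millisecond RTT.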
00:29:35.496 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:35.752 [2024-11-29 13:13:35.405365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.753 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:36.010 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.010 [2024-11-29 13:13:35.793680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.010 13:13:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:36.269 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:36.527 Malloc0 00:29:36.527 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:36.785 Delay0 00:29:36.785 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.044 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:37.044 NULL1 00:29:37.044 13:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:37.302 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2162365 00:29:37.302 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:37.302 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:37.302 13:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.677 Read completed with error (sct=0, sc=11) 00:29:38.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:38.677 13:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:38.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:38.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
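The provisioning sequence traced above (target/ns_hotplug_stress.sh@27-36) can be sketched as the following RPC order. As before this is a dry-run reconstruction from the log: `rpc()` echoes instead of invoking `scripts/rpc.py`, so only the order of operations is shown.

```shell
# Dry-run sketch of the target provisioning traced above; rpc() echoes only.
# All names and arguments are taken verbatim from the log.
rpc() { echo "rpc.py $*"; }
NQN=nqn.2016-06.io.spdk:cnode1

provision_target() {
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_malloc_create 32 512 -b Malloc0
  # Delay bdev layered on Malloc0; the -r/-t/-w/-n values are per-I/O latencies.
  rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc nvmf_subsystem_add_ns "$NQN" Delay0   # added first, so it becomes nsid 1
  rpc bdev_null_create NULL1 1000 512       # null bdev, resized each iteration below
  rpc nvmf_subsystem_add_ns "$NQN" NULL1
}

provision_target
```

The slow Delay0 namespace is the one that gets hot-removed as nsid 1 in each iteration below, while `spdk_nvme_perf` (started right after with `-t 30 -q 128 -w randread`) keeps I/O in flight against the subsystem — hence the suppressed `Read completed with error (sct=0, sc=11)` messages.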
00:29:38.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:38.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:38.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:38.677 13:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:38.677 13:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:38.936 true 00:29:38.936 13:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:38.936 13:13:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.873 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.873 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:39.873 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:40.132 true 00:29:40.132 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:40.132 13:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:40.390 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:40.649 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:40.649 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:40.649 true 00:29:40.908 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:40.908 13:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:41.844 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.844 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:41.844 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:42.103 true 00:29:42.103 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:42.103 13:13:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.361 13:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.620 13:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:42.620 13:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:42.620 true 00:29:42.879 13:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:42.879 13:13:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:43.813 13:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
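The pattern repeating through the rest of this section (target/ns_hotplug_stress.sh@44-50) is a loop: while `spdk_nvme_perf` is still alive, namespace 1 (Delay0) is hot-removed and re-added, and NULL1 is resized up by one each pass. A sketch, with `rpc()` echoing and the `kill -0 "$PERF_PID"` liveness check stubbed by a fixed iteration count:

```shell
# Sketch of the hotplug stress loop traced above; rpc() echoes only, and the
# perf-process liveness check is replaced by an iteration counter.
rpc() { echo "rpc.py $*"; }
NQN=nqn.2016-06.io.spdk:cnode1

hotplug_loop() {
  local iterations=$1 null_size=1000
  while [ "$iterations" -gt 0 ]; do          # real script: kill -0 "$PERF_PID"
    rpc nvmf_subsystem_remove_ns "$NQN" 1    # hot-unplug the Delay0 namespace
    rpc nvmf_subsystem_add_ns "$NQN" Delay0  # hot-replug it
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"  # grow NULL1: 1001, 1002, ...
    iterations=$((iterations - 1))
  done
  echo "final null_size=$null_size"
}

hotplug_loop 3
```

This matches the log's progression (`null_size=1001`, `1002`, ... with a `true` from each successful resize); the loop exits only when the 30-second perf run finishes and `kill -0` fails.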
00:29:44.072 13:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:44.072 13:13:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:44.331 true 00:29:44.331 13:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:44.331 13:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.266 13:13:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.266 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:45.266 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:45.523 true 00:29:45.523 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:45.523 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.781 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.039 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:46.039 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:46.039 true 00:29:46.298 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:46.298 13:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.232 13:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.491 13:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:47.491 13:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 
00:29:47.750 true 00:29:47.750 13:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:47.750 13:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.685 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.685 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:48.685 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:48.944 true 00:29:48.944 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:48.944 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.203 13:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.462 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:49.462 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:49.462 true 00:29:49.462 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:49.462 13:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.839 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.839 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:50.839 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:51.097 true 00:29:51.097 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:51.097 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.355 13:13:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.355 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:51.355 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:51.613 true 00:29:51.613 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:51.613 13:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:52.547 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:52.806 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:52.806 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:53.065 true 00:29:53.065 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:53.065 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.323 13:13:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.582 13:13:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:53.582 13:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:53.582 true 00:29:53.582 13:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:53.582 13:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.960 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:54.960 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:54.960 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:55.219 true 00:29:55.219 13:13:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:55.219 13:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.154 13:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.154 13:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:56.154 13:13:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:56.412 true 00:29:56.412 13:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:56.412 13:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.670 13:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.928 13:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:56.928 13:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:57.187 true 
00:29:57.187 13:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:57.187 13:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:58.121 13:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.378 13:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:58.378 13:13:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:58.378 true 00:29:58.378 13:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:58.378 13:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.636 13:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.895 13:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:58.895 13:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:59.154 true 00:29:59.154 13:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:29:59.154 13:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.091 13:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.350 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:00.350 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:00.608 true 00:30:00.609 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:30:00.609 13:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.543 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.543 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:01.544 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:01.802 true 00:30:01.802 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:30:01.802 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.061 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.320 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:02.320 13:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:02.320 true 00:30:02.320 13:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:30:02.320 13:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:03.704 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:03.704 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:03.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:03.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:03.705 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:03.705 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:03.705 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:03.964 true 00:30:03.964 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:30:03.964 13:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:04.902 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:30:04.902 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:04.902 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:05.160 true 00:30:05.160 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:30:05.160 13:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.419 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.419 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:05.419 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:05.680 true 00:30:05.681 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:30:05.681 13:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.058 13:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:07.058 13:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:07.058 13:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:07.316 true 00:30:07.316 13:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:30:07.316 13:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.253 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.253 Initializing NVMe Controllers 00:30:08.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.253 Controller IO queue size 128, less than required. 00:30:08.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:08.253 Controller IO queue size 128, less than required. 00:30:08.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:08.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:08.253 Initialization complete. Launching workers. 00:30:08.253 ======================================================== 00:30:08.253 Latency(us) 00:30:08.253 Device Information : IOPS MiB/s Average min max 00:30:08.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1860.83 0.91 47067.85 2741.45 1013955.53 00:30:08.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17050.20 8.33 7507.21 1206.58 380859.87 00:30:08.253 ======================================================== 00:30:08.253 Total : 18911.03 9.23 11399.95 1206.58 1013955.53 00:30:08.253 00:30:08.253 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:08.253 13:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:08.512 true 00:30:08.512 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2162365 00:30:08.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2162365) - No such process 00:30:08.512 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2162365 00:30:08.512 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.770 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:09.029 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:09.029 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:09.029 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:09.029 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:09.029 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:09.029 null0 00:30:09.029 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:09.029 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:09.029 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:09.288 null1 00:30:09.288 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:09.288 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:09.288 13:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:09.547 null2 00:30:09.547 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:09.547 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:09.547 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:09.547 null3 00:30:09.547 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:09.547 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:09.547 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:09.818 null4 00:30:09.818 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:09.818 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:09.818 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:10.077 null5 00:30:10.077 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:10.077 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:30:10.077 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:10.337 null6 00:30:10.337 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:10.337 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:10.337 13:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:10.337 null7 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2167802 2167804 2167806 2167807 2167809 2167811 2167814 2167819 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.337 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:10.596 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:10.596 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.596 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:30:10.596 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:10.596 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:10.596 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:10.596 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:10.596 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.856 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:11.115 13:14:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:11.115 13:14:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.115 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.374 13:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:11.374 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:11.374 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:11.374 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.374 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:11.374 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:11.374 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:11.374 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:11.374 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.634 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.634 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.634 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:11.893 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:11.893 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:11.893 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:11.893 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:11.893 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.893 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:11.893 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:11.893 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:12.153 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.153 13:14:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:12.153 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:12.413 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:12.413 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:12.413 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.413 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:12.413 13:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:12.413 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:12.413 13:14:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:12.413 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.413 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.413 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:12.413 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.413 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.413 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:12.413 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.413 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.414 13:14:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:12.414 13:14:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.414 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:12.673 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:12.673 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:12.673 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:12.673 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:12.673 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:12.673 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.673 13:14:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:12.673 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
2 nqn.2016-06.io.spdk:cnode1 null1 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.933 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:13.193 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:13.193 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:13.193 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:13.193 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:13.193 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:13.193 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:13.193 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:13.193 13:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.193 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.193 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.193 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.452 13:14:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.452 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.453 13:14:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:13.453 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:13.711 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:13.970 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:13.970 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:13.970 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:13.970 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:13.970 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:13.970 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:13.970 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:13.970 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.230 13:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:14.230 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.489 13:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.489 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:14.490 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:14.490 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:14.490 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:14.490 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.490 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:14.490 13:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.490 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:14.490 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.490 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.490 rmmod nvme_tcp 00:30:14.749 rmmod nvme_fabrics 00:30:14.749 rmmod nvme_keyring 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2161989 ']' 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2161989 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2161989 ']' 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2161989 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2161989 00:30:14.749 13:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2161989' 00:30:14.749 killing process with pid 2161989 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2161989 00:30:14.749 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2161989 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:15.008 13:14:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.008 13:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.914 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.914 00:30:16.914 real 0m47.295s 00:30:16.914 user 2m58.491s 00:30:16.914 sys 0m19.941s 00:30:16.914 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:16.914 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:16.914 ************************************ 00:30:16.914 END TEST nvmf_ns_hotplug_stress 00:30:16.914 ************************************ 00:30:16.914 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:16.914 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:16.914 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.914 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:16.914 ************************************ 00:30:16.914 START TEST nvmf_delete_subsystem 00:30:16.914 ************************************ 00:30:16.914 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:17.174 * Looking for test storage... 00:30:17.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.174 
13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.174 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:17.175 13:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.175 --rc genhtml_branch_coverage=1 00:30:17.175 --rc genhtml_function_coverage=1 00:30:17.175 --rc genhtml_legend=1 00:30:17.175 --rc geninfo_all_blocks=1 00:30:17.175 --rc geninfo_unexecuted_blocks=1 00:30:17.175 00:30:17.175 ' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.175 --rc genhtml_branch_coverage=1 00:30:17.175 --rc genhtml_function_coverage=1 00:30:17.175 --rc genhtml_legend=1 00:30:17.175 --rc geninfo_all_blocks=1 00:30:17.175 --rc geninfo_unexecuted_blocks=1 00:30:17.175 00:30:17.175 ' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.175 --rc genhtml_branch_coverage=1 00:30:17.175 --rc genhtml_function_coverage=1 00:30:17.175 --rc genhtml_legend=1 00:30:17.175 --rc geninfo_all_blocks=1 00:30:17.175 --rc 
geninfo_unexecuted_blocks=1 00:30:17.175 00:30:17.175 ' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:17.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.175 --rc genhtml_branch_coverage=1 00:30:17.175 --rc genhtml_function_coverage=1 00:30:17.175 --rc genhtml_legend=1 00:30:17.175 --rc geninfo_all_blocks=1 00:30:17.175 --rc geninfo_unexecuted_blocks=1 00:30:17.175 00:30:17.175 ' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.175 
13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:17.175 13:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.175 13:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.452 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:22.453 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:30:22.453 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.453 13:14:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:22.453 Found net devices under 0000:86:00.0: cvl_0_0 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:22.453 Found net devices under 0000:86:00.1: cvl_0_1 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:22.453 13:14:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:22.453 13:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.453 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:22.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:30:22.454 00:30:22.454 --- 10.0.0.2 ping statistics --- 00:30:22.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.454 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:22.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:30:22.454 00:30:22.454 --- 10.0.0.1 ping statistics --- 00:30:22.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.454 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2171948 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2171948 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2171948 ']' 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.454 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.454 [2024-11-29 13:14:22.153789] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:22.454 [2024-11-29 13:14:22.154736] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:30:22.454 [2024-11-29 13:14:22.154772] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.454 [2024-11-29 13:14:22.221274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:22.454 [2024-11-29 13:14:22.263025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.454 [2024-11-29 13:14:22.263061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.454 [2024-11-29 13:14:22.263068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.454 [2024-11-29 13:14:22.263074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.454 [2024-11-29 13:14:22.263079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:22.454 [2024-11-29 13:14:22.264283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.454 [2024-11-29 13:14:22.264288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.714 [2024-11-29 13:14:22.333367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:22.714 [2024-11-29 13:14:22.333717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:22.714 [2024-11-29 13:14:22.333752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 
-- # set +x 00:30:22.714 [2024-11-29 13:14:22.396785] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.714 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.715 [2024-11-29 13:14:22.412991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.715 13:14:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.715 NULL1 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.715 Delay0 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2171990 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:22.715 13:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:22.715 [2024-11-29 13:14:22.494707] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:25.250 13:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:25.250 13:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.250 13:14:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 
00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 Read completed with error (sct=0, 
sc=8) 00:30:25.250 Read completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.250 Write completed with error (sct=0, sc=8) 00:30:25.250 starting I/O failed: -6 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 starting I/O failed: -6 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 starting I/O failed: -6 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 starting I/O failed: -6 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 starting I/O failed: -6 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 starting I/O failed: -6 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 starting I/O failed: -6 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 starting I/O failed: -6 00:30:25.251 Write completed with error (sct=0, sc=8) 
00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 starting I/O failed: -6 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 starting I/O failed: -6 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 [2024-11-29 13:14:24.622802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f17f000d020 is same with the state(6) to be set 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error 
(sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 
Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Write completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.251 Read completed with error (sct=0, sc=8) 00:30:25.818 [2024-11-29 13:14:25.589654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e69b0 is same with the state(6) to be set 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed 
with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 [2024-11-29 13:14:25.623552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f17f000d350 is same with the state(6) to be set 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error 
(sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 [2024-11-29 13:14:25.624832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e52c0 is same with the state(6) to be set 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 
00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 [2024-11-29 13:14:25.624997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e54a0 is same with the state(6) to be set 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.818 Read completed with error (sct=0, sc=8) 00:30:25.818 Write completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read 
completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Read completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 Write completed with error (sct=0, sc=8) 00:30:25.819 [2024-11-29 13:14:25.625824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e5860 is same with the state(6) to be set 00:30:25.819 Initializing NVMe Controllers 00:30:25.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.819 Controller IO queue size 128, less than required. 00:30:25.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:25.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:25.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:25.819 Initialization complete. Launching workers. 
00:30:25.819 ========================================================
00:30:25.819 Latency(us)
00:30:25.819 Device Information : IOPS MiB/s Average min max
00:30:25.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.08 0.09 949975.83 420.35 1011752.62
00:30:25.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.36 0.07 888781.13 251.40 1012874.25
00:30:25.819 ========================================================
00:30:25.819 Total : 342.43 0.17 922748.62 251.40 1012874.25
00:30:25.819
00:30:25.819 [2024-11-29 13:14:25.626449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e69b0 (9): Bad file descriptor
00:30:25.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:25.819 13:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:25.819 13:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:30:25.819 13:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2171990
00:30:25.819 13:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:30:26.387 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:30:26.387 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2171990
00:30:26.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2171990) - No such process
00:30:26.387 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2171990
00:30:26.387 13:14:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:26.387 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2171990 00:30:26.387 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2171990 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
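The `NOT wait 2171990` trace above shows the harness asserting that waiting on the already-reaped perf PID fails with a nonzero status (`es=1`). A simplified sketch of that inverted-expectation pattern (not the actual `autotest_common.sh` implementation, which also validates the argument type and distinguishes signal exits):

```shell
# NOT: succeed only when the wrapped command fails (simplified sketch of
# the expected-failure helper used by the test harness).
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as expected
}

NOT false && echo "expected failure detected"
NOT true || echo "unexpected success detected"
```

Because `NOT` returns success for an expected failure, an `set -e` test script survives steps that are supposed to fail, such as waiting on a process that was killed out from under it.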
00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:26.388 [2024-11-29 13:14:26.157283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2172656 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2172656 00:30:26.388 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:26.647 [2024-11-29 13:14:26.226554] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:26.906 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:26.906 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2172656 00:30:26.906 13:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:27.473 13:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:27.473 13:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2172656 00:30:27.473 13:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:28.042 13:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:28.042 13:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2172656 00:30:28.042 13:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:28.611 13:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:30:28.611 13:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2172656 00:30:28.611 13:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:29.179 13:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:29.179 13:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2172656 00:30:29.179 13:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:29.438 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:29.438 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2172656 00:30:29.438 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:29.698 Initializing NVMe Controllers 00:30:29.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.698 Controller IO queue size 128, less than required. 00:30:29.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:29.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:29.698 Initialization complete. Launching workers. 
00:30:29.698 ======================================================== 00:30:29.698 Latency(us) 00:30:29.698 Device Information : IOPS MiB/s Average min max 00:30:29.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003453.94 1000157.02 1040972.80 00:30:29.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005250.71 1000166.05 1012022.44 00:30:29.698 ======================================================== 00:30:29.698 Total : 256.00 0.12 1004352.32 1000157.02 1040972.80 00:30:29.698 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2172656 00:30:29.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2172656) - No such process 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2172656 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.957 rmmod nvme_tcp 00:30:29.957 rmmod nvme_fabrics 00:30:29.957 rmmod nvme_keyring 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2171948 ']' 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2171948 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2171948 ']' 00:30:29.957 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2171948 00:30:29.958 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:29.958 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2171948 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:30.217 13:14:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2171948' 00:30:30.217 killing process with pid 2171948 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2171948 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2171948 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.217 13:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.217 13:14:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:32.756 00:30:32.756 real 0m15.332s 00:30:32.756 user 0m25.634s 00:30:32.756 sys 0m5.684s 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.756 ************************************ 00:30:32.756 END TEST nvmf_delete_subsystem 00:30:32.756 ************************************ 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:32.756 ************************************ 00:30:32.756 START TEST nvmf_host_management 00:30:32.756 ************************************ 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:32.756 * Looking for test storage... 
00:30:32.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:32.756 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:32.757 13:14:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:32.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.757 --rc genhtml_branch_coverage=1 00:30:32.757 --rc genhtml_function_coverage=1 00:30:32.757 --rc genhtml_legend=1 00:30:32.757 --rc geninfo_all_blocks=1 00:30:32.757 --rc geninfo_unexecuted_blocks=1 00:30:32.757 00:30:32.757 ' 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:32.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.757 --rc genhtml_branch_coverage=1 00:30:32.757 --rc genhtml_function_coverage=1 00:30:32.757 --rc genhtml_legend=1 00:30:32.757 --rc geninfo_all_blocks=1 00:30:32.757 --rc geninfo_unexecuted_blocks=1 00:30:32.757 00:30:32.757 ' 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:32.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.757 --rc genhtml_branch_coverage=1 00:30:32.757 --rc genhtml_function_coverage=1 00:30:32.757 --rc genhtml_legend=1 00:30:32.757 --rc geninfo_all_blocks=1 00:30:32.757 --rc geninfo_unexecuted_blocks=1 00:30:32.757 00:30:32.757 ' 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:32.757 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.757 --rc genhtml_branch_coverage=1 00:30:32.757 --rc genhtml_function_coverage=1 00:30:32.757 --rc genhtml_legend=1 00:30:32.757 --rc geninfo_all_blocks=1 00:30:32.757 --rc geninfo_unexecuted_blocks=1 00:30:32.757 00:30:32.757 ' 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.757 13:14:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.757 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.758 
13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:32.758 13:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.031 
13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.031 13:14:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:38.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.031 13:14:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:38.031 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.031 13:14:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:38.031 Found net devices under 0000:86:00.0: cvl_0_0 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.031 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:38.032 Found net devices under 0000:86:00.1: cvl_0_1 00:30:38.032 13:14:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.032 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:30:38.291 00:30:38.291 --- 10.0.0.2 ping statistics --- 00:30:38.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.291 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:38.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:38.291 00:30:38.291 --- 10.0.0.1 ping statistics --- 00:30:38.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.291 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2176647 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2176647 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2176647 ']' 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.291 13:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:38.291 [2024-11-29 13:14:37.991107] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:38.291 [2024-11-29 13:14:37.992055] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:30:38.291 [2024-11-29 13:14:37.992090] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.291 [2024-11-29 13:14:38.058772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:38.291 [2024-11-29 13:14:38.101871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.291 [2024-11-29 13:14:38.101908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.291 [2024-11-29 13:14:38.101915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.291 [2024-11-29 13:14:38.101921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.291 [2024-11-29 13:14:38.101926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:38.291 [2024-11-29 13:14:38.103549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.291 [2024-11-29 13:14:38.103637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:38.291 [2024-11-29 13:14:38.103924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.291 [2024-11-29 13:14:38.103925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:38.550 [2024-11-29 13:14:38.171637] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:38.550 [2024-11-29 13:14:38.171806] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:38.550 [2024-11-29 13:14:38.172255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:38.550 [2024-11-29 13:14:38.172274] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:38.550 [2024-11-29 13:14:38.172443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:38.550 [2024-11-29 13:14:38.241647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:38.550 13:14:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:38.550 Malloc0 00:30:38.550 [2024-11-29 13:14:38.324566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:38.550 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2176882 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2176882 /var/tmp/bdevperf.sock 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2176882 ']' 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:38.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:38.809 { 00:30:38.809 "params": { 00:30:38.809 "name": "Nvme$subsystem", 00:30:38.809 "trtype": "$TEST_TRANSPORT", 00:30:38.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.809 "adrfam": "ipv4", 00:30:38.809 "trsvcid": "$NVMF_PORT", 00:30:38.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.809 "hdgst": ${hdgst:-false}, 00:30:38.809 "ddgst": ${ddgst:-false} 00:30:38.809 }, 00:30:38.809 "method": "bdev_nvme_attach_controller" 00:30:38.809 } 00:30:38.809 EOF 00:30:38.809 )") 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:38.809 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:38.809 "params": { 00:30:38.809 "name": "Nvme0", 00:30:38.809 "trtype": "tcp", 00:30:38.809 "traddr": "10.0.0.2", 00:30:38.809 "adrfam": "ipv4", 00:30:38.809 "trsvcid": "4420", 00:30:38.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:38.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:38.809 "hdgst": false, 00:30:38.809 "ddgst": false 00:30:38.809 }, 00:30:38.809 "method": "bdev_nvme_attach_controller" 00:30:38.809 }' 00:30:38.809 [2024-11-29 13:14:38.420349] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:30:38.809 [2024-11-29 13:14:38.420403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2176882 ] 00:30:38.809 [2024-11-29 13:14:38.482774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.809 [2024-11-29 13:14:38.524263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.068 Running I/O for 10 seconds... 
00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:39.068 13:14:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:30:39.068 13:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.329 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:39.329 [2024-11-29 13:14:39.076402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72cd70 is same with the state(6) to be set 00:30:39.329 [... identical tcp.c:1773 recv-state notices for tqpair=0x72cd70 repeated through 13:14:39.076823; duplicates elided ...] 00:30:39.330 [2024-11-29 13:14:39.076894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.330 [2024-11-29 13:14:39.076927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:39.330 [identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeated for cid:1 through cid:62, lba:90240 through lba:98048, timestamps 13:14:39.076945 through 13:14:39.077879]
00:30:39.332 [2024-11-29 13:14:39.077887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:39.332 [2024-11-29 13:14:39.077894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:39.332 [2024-11-29 13:14:39.077902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124d430 is same with the state(6) to be set
00:30:39.332 [2024-11-29 13:14:39.078885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:30:39.332 task offset: 90112 on job bdev=Nvme0n1 fails
00:30:39.332
00:30:39.332 Latency(us)
00:30:39.332 [2024-11-29T12:14:39.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:39.332 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:39.332 Job: Nvme0n1 ended in about 0.40 seconds with error
00:30:39.332 Verification LBA range: start 0x0 length 0x400
00:30:39.332 Nvme0n1 : 0.40 1774.26 110.89 161.30 0.00 32169.98 3818.18 27810.06
00:30:39.332 [2024-11-29T12:14:39.152Z] ===================================================================================================================
00:30:39.332 [2024-11-29T12:14:39.152Z] Total : 1774.26 110.89 161.30 0.00 32169.98 3818.18 27810.06
00:30:39.332 [2024-11-29 13:14:39.081311] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:39.332 [2024-11-29 13:14:39.081333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1034510 (9): Bad file descriptor
13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.332 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host
nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:30:39.332 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.332 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:39.332 [2024-11-29 13:14:39.082361] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:30:39.332 [2024-11-29 13:14:39.082433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:30:39.332 [2024-11-29 13:14:39.082455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:39.332 [2024-11-29 13:14:39.082471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:30:39.332 [2024-11-29 13:14:39.082479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:30:39.332 [2024-11-29 13:14:39.082487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:39.332 [2024-11-29 13:14:39.082494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1034510
00:30:39.332 [2024-11-29 13:14:39.082515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1034510 (9): Bad file descriptor
00:30:39.332 [2024-11-29 13:14:39.082526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:30:39.332 [2024-11-29 13:14:39.082535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:30:39.332 [2024-11-29 13:14:39.082543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:30:39.332 [2024-11-29 13:14:39.082551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:30:39.332 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.332 13:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2176882
00:30:40.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2176882) - No such process
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:40.705 13:14:40
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:40.705 {
00:30:40.705 "params": {
00:30:40.705 "name": "Nvme$subsystem",
00:30:40.705 "trtype": "$TEST_TRANSPORT",
00:30:40.705 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:40.705 "adrfam": "ipv4",
00:30:40.705 "trsvcid": "$NVMF_PORT",
00:30:40.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:40.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:40.705 "hdgst": ${hdgst:-false},
00:30:40.705 "ddgst": ${ddgst:-false}
00:30:40.705 },
00:30:40.705 "method": "bdev_nvme_attach_controller"
00:30:40.705 }
00:30:40.705 EOF
00:30:40.705 )")
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:30:40.705 13:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:40.705 "params": {
00:30:40.705 "name": "Nvme0",
00:30:40.705 "trtype": "tcp",
00:30:40.705 "traddr": "10.0.0.2",
00:30:40.705 "adrfam": "ipv4",
00:30:40.705 "trsvcid": "4420",
00:30:40.705 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:40.705 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:40.705 "hdgst": false,
00:30:40.705 "ddgst": false
00:30:40.705 },
00:30:40.705 "method": "bdev_nvme_attach_controller"
00:30:40.705 }'
00:30:40.705 [2024-11-29 13:14:40.146875] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization...
00:30:40.705 [2024-11-29 13:14:40.146926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177154 ]
00:30:40.705 [2024-11-29 13:14:40.209637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:40.705 [2024-11-29 13:14:40.251520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:40.705 Running I/O for 1 seconds...
00:30:41.898 1920.00 IOPS, 120.00 MiB/s
00:30:41.898
00:30:41.898 Latency(us)
00:30:41.898 [2024-11-29T12:14:41.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:41.898 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:41.898 Verification LBA range: start 0x0 length 0x400
00:30:41.898 Nvme0n1 : 1.02 1953.42 122.09 0.00 0.00 32251.20 5299.87 27924.03
00:30:41.898 [2024-11-29T12:14:41.718Z] ===================================================================================================================
00:30:41.898 [2024-11-29T12:14:41.718Z] Total : 1953.42 122.09 0.00 0.00 32251.20 5299.87 27924.03
13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management --
target/host_management.sh@40 -- # nvmftestfini 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.898 rmmod nvme_tcp 00:30:41.898 rmmod nvme_fabrics 00:30:41.898 rmmod nvme_keyring 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2176647 ']' 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2176647 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2176647 ']' 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2176647 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:41.898 13:14:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.898 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176647 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2176647' 00:30:42.158 killing process with pid 2176647 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2176647 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2176647 00:30:42.158 [2024-11-29 13:14:41.913854] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:42.158 13:14:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.158 13:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.693 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.693 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:44.693 00:30:44.694 real 0m11.894s 00:30:44.694 user 0m17.211s 00:30:44.694 sys 0m6.016s 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:44.694 ************************************ 00:30:44.694 END TEST nvmf_host_management 00:30:44.694 ************************************ 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:44.694 
13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:44.694 ************************************ 00:30:44.694 START TEST nvmf_lvol 00:30:44.694 ************************************ 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:44.694 * Looking for test storage... 00:30:44.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.694 13:14:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:44.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.694 --rc genhtml_branch_coverage=1 00:30:44.694 --rc 
genhtml_function_coverage=1 00:30:44.694 --rc genhtml_legend=1 00:30:44.694 --rc geninfo_all_blocks=1 00:30:44.694 --rc geninfo_unexecuted_blocks=1 00:30:44.694 00:30:44.694 ' 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:44.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.694 --rc genhtml_branch_coverage=1 00:30:44.694 --rc genhtml_function_coverage=1 00:30:44.694 --rc genhtml_legend=1 00:30:44.694 --rc geninfo_all_blocks=1 00:30:44.694 --rc geninfo_unexecuted_blocks=1 00:30:44.694 00:30:44.694 ' 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:44.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.694 --rc genhtml_branch_coverage=1 00:30:44.694 --rc genhtml_function_coverage=1 00:30:44.694 --rc genhtml_legend=1 00:30:44.694 --rc geninfo_all_blocks=1 00:30:44.694 --rc geninfo_unexecuted_blocks=1 00:30:44.694 00:30:44.694 ' 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:44.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.694 --rc genhtml_branch_coverage=1 00:30:44.694 --rc genhtml_function_coverage=1 00:30:44.694 --rc genhtml_legend=1 00:30:44.694 --rc geninfo_all_blocks=1 00:30:44.694 --rc geninfo_unexecuted_blocks=1 00:30:44.694 00:30:44.694 ' 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:44.694 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.695 13:14:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.695 13:14:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:44.695 13:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:49.966 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:49.966 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.966 13:14:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.966 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:49.967 Found net devices under 0000:86:00.0: cvl_0_0 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:49.967 Found net devices under 0000:86:00.1: cvl_0_1 00:30:49.967 13:14:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.967 13:14:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:49.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:49.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:30:49.967 00:30:49.967 --- 10.0.0.2 ping statistics --- 00:30:49.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.967 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:49.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:30:49.967 00:30:49.967 --- 10.0.0.1 ping statistics --- 00:30:49.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.967 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:49.967 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:50.227 
13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2180856 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2180856 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2180856 ']' 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.227 13:14:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:50.227 [2024-11-29 13:14:49.860438] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:50.227 [2024-11-29 13:14:49.861442] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:30:50.227 [2024-11-29 13:14:49.861482] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.227 [2024-11-29 13:14:49.928559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.227 [2024-11-29 13:14:49.971821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.227 [2024-11-29 13:14:49.971856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.227 [2024-11-29 13:14:49.971862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.227 [2024-11-29 13:14:49.971868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.227 [2024-11-29 13:14:49.971874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.227 [2024-11-29 13:14:49.973261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.227 [2024-11-29 13:14:49.973359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.227 [2024-11-29 13:14:49.973359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.227 [2024-11-29 13:14:50.044084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:50.227 [2024-11-29 13:14:50.044105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:50.227 [2024-11-29 13:14:50.044178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:50.227 [2024-11-29 13:14:50.044305] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:50.486 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.486 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:50.486 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:50.486 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:50.486 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:50.486 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.486 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:50.486 [2024-11-29 13:14:50.273839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.486 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:50.745 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:50.745 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:51.003 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:51.003 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:51.261 13:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:51.531 13:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8bb2a964-630e-4d57-aae9-4ed80cb54249 00:30:51.531 13:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8bb2a964-630e-4d57-aae9-4ed80cb54249 lvol 20 00:30:51.531 13:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=dda6a678-5576-41ef-89b8-4d9b6bf49827 00:30:51.531 13:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:51.818 13:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dda6a678-5576-41ef-89b8-4d9b6bf49827 00:30:52.152 13:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:52.152 [2024-11-29 13:14:51.889940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.152 13:14:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.474 
13:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2181181 00:30:52.474 13:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:52.474 13:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:53.437 13:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dda6a678-5576-41ef-89b8-4d9b6bf49827 MY_SNAPSHOT 00:30:53.697 13:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0c8b223e-4a7a-4658-8734-d272064061b1 00:30:53.697 13:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dda6a678-5576-41ef-89b8-4d9b6bf49827 30 00:30:53.956 13:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0c8b223e-4a7a-4658-8734-d272064061b1 MY_CLONE 00:30:54.215 13:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8a6ab635-58de-44d4-8dd8-0e4dc56fa13c 00:30:54.215 13:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8a6ab635-58de-44d4-8dd8-0e4dc56fa13c 00:30:54.782 13:14:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2181181 00:31:02.895 Initializing NVMe Controllers 00:31:02.895 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:02.895 
Controller IO queue size 128, less than required. 00:31:02.896 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:02.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:02.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:02.896 Initialization complete. Launching workers. 00:31:02.896 ======================================================== 00:31:02.896 Latency(us) 00:31:02.896 Device Information : IOPS MiB/s Average min max 00:31:02.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11758.90 45.93 10886.66 1868.94 61232.32 00:31:02.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11894.40 46.46 10761.22 429.04 58277.79 00:31:02.896 ======================================================== 00:31:02.896 Total : 23653.30 92.40 10823.58 429.04 61232.32 00:31:02.896 00:31:02.896 13:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:02.896 13:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dda6a678-5576-41ef-89b8-4d9b6bf49827 00:31:03.155 13:15:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8bb2a964-630e-4d57-aae9-4ed80cb54249 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.415 rmmod nvme_tcp 00:31:03.415 rmmod nvme_fabrics 00:31:03.415 rmmod nvme_keyring 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2180856 ']' 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2180856 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2180856 ']' 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2180856 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2180856 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180856' 00:31:03.415 killing process with pid 2180856 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2180856 00:31:03.415 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2180856 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.674 13:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.674 13:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.206 00:31:06.206 real 0m21.398s 00:31:06.206 user 0m55.454s 00:31:06.206 sys 0m9.524s 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:06.206 ************************************ 00:31:06.206 END TEST nvmf_lvol 00:31:06.206 ************************************ 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:06.206 ************************************ 00:31:06.206 START TEST nvmf_lvs_grow 00:31:06.206 ************************************ 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:06.206 * Looking for test storage... 
00:31:06.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.206 13:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:06.206 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.207 13:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:06.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.207 --rc genhtml_branch_coverage=1 00:31:06.207 --rc genhtml_function_coverage=1 00:31:06.207 --rc genhtml_legend=1 00:31:06.207 --rc geninfo_all_blocks=1 00:31:06.207 --rc geninfo_unexecuted_blocks=1 00:31:06.207 00:31:06.207 ' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:06.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.207 --rc genhtml_branch_coverage=1 00:31:06.207 --rc genhtml_function_coverage=1 00:31:06.207 --rc genhtml_legend=1 00:31:06.207 --rc geninfo_all_blocks=1 00:31:06.207 --rc geninfo_unexecuted_blocks=1 00:31:06.207 00:31:06.207 ' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:06.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.207 --rc genhtml_branch_coverage=1 00:31:06.207 --rc genhtml_function_coverage=1 00:31:06.207 --rc genhtml_legend=1 00:31:06.207 --rc geninfo_all_blocks=1 00:31:06.207 --rc geninfo_unexecuted_blocks=1 00:31:06.207 00:31:06.207 ' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:06.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.207 --rc genhtml_branch_coverage=1 00:31:06.207 --rc genhtml_function_coverage=1 00:31:06.207 --rc genhtml_legend=1 00:31:06.207 --rc geninfo_all_blocks=1 00:31:06.207 --rc 
geninfo_unexecuted_blocks=1 00:31:06.207 00:31:06.207 ' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.207 13:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.207 13:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.207 13:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:06.207 13:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:11.476 
13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.476 13:15:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.476 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:11.477 13:15:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:11.477 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:11.477 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:11.477 Found net devices under 0000:86:00.0: cvl_0_0 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.477 13:15:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:11.477 Found net devices under 0000:86:00.1: cvl_0_1 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:11.477 
13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:11.477 13:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:11.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:31:11.477 00:31:11.477 --- 10.0.0.2 ping statistics --- 00:31:11.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.477 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:11.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:11.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:31:11.477 00:31:11.477 --- 10.0.0.1 ping statistics --- 00:31:11.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.477 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:11.477 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:11.478 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:11.478 13:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2187015 00:31:11.478 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:11.478 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2187015 00:31:11.478 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2187015 ']' 00:31:11.478 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.478 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:11.478 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.478 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:11.478 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:11.736 [2024-11-29 13:15:11.316018] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:11.736 [2024-11-29 13:15:11.316905] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:31:11.736 [2024-11-29 13:15:11.316937] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.736 [2024-11-29 13:15:11.383923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.736 [2024-11-29 13:15:11.426288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.737 [2024-11-29 13:15:11.426324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.737 [2024-11-29 13:15:11.426332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.737 [2024-11-29 13:15:11.426338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.737 [2024-11-29 13:15:11.426343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:11.737 [2024-11-29 13:15:11.426901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.737 [2024-11-29 13:15:11.495617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:11.737 [2024-11-29 13:15:11.495839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:11.737 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:11.737 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:11.737 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:11.737 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:11.737 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:11.996 [2024-11-29 13:15:11.731360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:11.996 ************************************ 00:31:11.996 START TEST lvs_grow_clean 00:31:11.996 ************************************ 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:31:11.996 13:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:11.996 13:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:12.254 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:12.254 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:12.511 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:12.511 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:12.511 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:12.769 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:12.769 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:12.769 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 lvol 150 00:31:12.769 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=84db96e9-3640-4b3b-962e-eee6184a88da 00:31:12.769 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:12.769 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:13.027 [2024-11-29 13:15:12.759240] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:13.027 [2024-11-29 13:15:12.759320] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:13.027 true 00:31:13.027 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:13.027 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:13.286 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:13.286 13:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:13.545 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 84db96e9-3640-4b3b-962e-eee6184a88da 00:31:13.804 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:13.804 [2024-11-29 13:15:13.555658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.804 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2187350 00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2187350 /var/tmp/bdevperf.sock 00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2187350 ']' 00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:14.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.063 13:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:14.063 [2024-11-29 13:15:13.831062] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:31:14.063 [2024-11-29 13:15:13.831112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187350 ] 00:31:14.322 [2024-11-29 13:15:13.896464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.322 [2024-11-29 13:15:13.941493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.322 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.322 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:14.322 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:14.580 Nvme0n1 00:31:14.580 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:14.840 [ 00:31:14.840 { 00:31:14.840 "name": "Nvme0n1", 00:31:14.840 "aliases": [ 00:31:14.840 "84db96e9-3640-4b3b-962e-eee6184a88da" 00:31:14.840 ], 00:31:14.840 "product_name": "NVMe disk", 00:31:14.840 
"block_size": 4096, 00:31:14.840 "num_blocks": 38912, 00:31:14.840 "uuid": "84db96e9-3640-4b3b-962e-eee6184a88da", 00:31:14.840 "numa_id": 1, 00:31:14.840 "assigned_rate_limits": { 00:31:14.840 "rw_ios_per_sec": 0, 00:31:14.840 "rw_mbytes_per_sec": 0, 00:31:14.840 "r_mbytes_per_sec": 0, 00:31:14.840 "w_mbytes_per_sec": 0 00:31:14.840 }, 00:31:14.840 "claimed": false, 00:31:14.840 "zoned": false, 00:31:14.840 "supported_io_types": { 00:31:14.840 "read": true, 00:31:14.840 "write": true, 00:31:14.840 "unmap": true, 00:31:14.840 "flush": true, 00:31:14.840 "reset": true, 00:31:14.840 "nvme_admin": true, 00:31:14.840 "nvme_io": true, 00:31:14.840 "nvme_io_md": false, 00:31:14.840 "write_zeroes": true, 00:31:14.840 "zcopy": false, 00:31:14.840 "get_zone_info": false, 00:31:14.840 "zone_management": false, 00:31:14.840 "zone_append": false, 00:31:14.840 "compare": true, 00:31:14.840 "compare_and_write": true, 00:31:14.840 "abort": true, 00:31:14.840 "seek_hole": false, 00:31:14.840 "seek_data": false, 00:31:14.840 "copy": true, 00:31:14.840 "nvme_iov_md": false 00:31:14.840 }, 00:31:14.840 "memory_domains": [ 00:31:14.840 { 00:31:14.840 "dma_device_id": "system", 00:31:14.840 "dma_device_type": 1 00:31:14.840 } 00:31:14.840 ], 00:31:14.840 "driver_specific": { 00:31:14.840 "nvme": [ 00:31:14.840 { 00:31:14.840 "trid": { 00:31:14.840 "trtype": "TCP", 00:31:14.840 "adrfam": "IPv4", 00:31:14.840 "traddr": "10.0.0.2", 00:31:14.840 "trsvcid": "4420", 00:31:14.840 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:14.840 }, 00:31:14.840 "ctrlr_data": { 00:31:14.840 "cntlid": 1, 00:31:14.840 "vendor_id": "0x8086", 00:31:14.840 "model_number": "SPDK bdev Controller", 00:31:14.840 "serial_number": "SPDK0", 00:31:14.840 "firmware_revision": "25.01", 00:31:14.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.840 "oacs": { 00:31:14.840 "security": 0, 00:31:14.840 "format": 0, 00:31:14.840 "firmware": 0, 00:31:14.840 "ns_manage": 0 00:31:14.840 }, 00:31:14.840 "multi_ctrlr": true, 
00:31:14.840 "ana_reporting": false 00:31:14.840 }, 00:31:14.840 "vs": { 00:31:14.840 "nvme_version": "1.3" 00:31:14.840 }, 00:31:14.840 "ns_data": { 00:31:14.840 "id": 1, 00:31:14.840 "can_share": true 00:31:14.840 } 00:31:14.840 } 00:31:14.840 ], 00:31:14.840 "mp_policy": "active_passive" 00:31:14.840 } 00:31:14.840 } 00:31:14.840 ] 00:31:14.840 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:14.840 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2187570 00:31:14.840 13:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:14.840 Running I/O for 10 seconds... 00:31:16.222 Latency(us) 00:31:16.222 [2024-11-29T12:15:16.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:16.222 Nvme0n1 : 1.00 22543.00 88.06 0.00 0.00 0.00 0.00 0.00 00:31:16.222 [2024-11-29T12:15:16.042Z] =================================================================================================================== 00:31:16.222 [2024-11-29T12:15:16.042Z] Total : 22543.00 88.06 0.00 0.00 0.00 0.00 0.00 00:31:16.222 00:31:16.790 13:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:16.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:16.790 Nvme0n1 : 2.00 22757.50 88.90 0.00 0.00 0.00 0.00 0.00 00:31:16.790 [2024-11-29T12:15:16.610Z] 
=================================================================================================================== 00:31:16.790 [2024-11-29T12:15:16.610Z] Total : 22757.50 88.90 0.00 0.00 0.00 0.00 0.00 00:31:16.790 00:31:17.048 true 00:31:17.048 13:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:17.048 13:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:17.307 13:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:17.307 13:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:17.307 13:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2187570 00:31:17.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:17.874 Nvme0n1 : 3.00 22791.67 89.03 0.00 0.00 0.00 0.00 0.00 00:31:17.874 [2024-11-29T12:15:17.694Z] =================================================================================================================== 00:31:17.874 [2024-11-29T12:15:17.694Z] Total : 22791.67 89.03 0.00 0.00 0.00 0.00 0.00 00:31:17.874 00:31:18.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:18.810 Nvme0n1 : 4.00 22840.50 89.22 0.00 0.00 0.00 0.00 0.00 00:31:18.810 [2024-11-29T12:15:18.630Z] =================================================================================================================== 00:31:18.810 [2024-11-29T12:15:18.630Z] Total : 22840.50 89.22 0.00 0.00 0.00 0.00 0.00 00:31:18.810 00:31:20.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:31:20.185 Nvme0n1 : 5.00 22895.20 89.43 0.00 0.00 0.00 0.00 0.00 00:31:20.185 [2024-11-29T12:15:20.005Z] =================================================================================================================== 00:31:20.185 [2024-11-29T12:15:20.005Z] Total : 22895.20 89.43 0.00 0.00 0.00 0.00 0.00 00:31:20.185 00:31:21.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:21.122 Nvme0n1 : 6.00 22849.83 89.26 0.00 0.00 0.00 0.00 0.00 00:31:21.122 [2024-11-29T12:15:20.942Z] =================================================================================================================== 00:31:21.122 [2024-11-29T12:15:20.942Z] Total : 22849.83 89.26 0.00 0.00 0.00 0.00 0.00 00:31:21.122 00:31:22.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:22.059 Nvme0n1 : 7.00 22887.57 89.40 0.00 0.00 0.00 0.00 0.00 00:31:22.059 [2024-11-29T12:15:21.879Z] =================================================================================================================== 00:31:22.059 [2024-11-29T12:15:21.879Z] Total : 22887.57 89.40 0.00 0.00 0.00 0.00 0.00 00:31:22.059 00:31:22.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:22.994 Nvme0n1 : 8.00 22915.88 89.52 0.00 0.00 0.00 0.00 0.00 00:31:22.994 [2024-11-29T12:15:22.814Z] =================================================================================================================== 00:31:22.994 [2024-11-29T12:15:22.814Z] Total : 22915.88 89.52 0.00 0.00 0.00 0.00 0.00 00:31:22.994 00:31:23.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:23.931 Nvme0n1 : 9.00 22952.00 89.66 0.00 0.00 0.00 0.00 0.00 00:31:23.931 [2024-11-29T12:15:23.751Z] =================================================================================================================== 00:31:23.931 [2024-11-29T12:15:23.751Z] Total : 22952.00 89.66 0.00 0.00 0.00 0.00 0.00 00:31:23.931 
00:31:24.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:24.888 Nvme0n1 : 10.00 22968.20 89.72 0.00 0.00 0.00 0.00 0.00 00:31:24.888 [2024-11-29T12:15:24.708Z] =================================================================================================================== 00:31:24.888 [2024-11-29T12:15:24.708Z] Total : 22968.20 89.72 0.00 0.00 0.00 0.00 0.00 00:31:24.888 00:31:24.888 00:31:24.888 Latency(us) 00:31:24.888 [2024-11-29T12:15:24.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:24.888 Nvme0n1 : 10.00 22970.00 89.73 0.00 0.00 5569.30 3333.79 14816.83 00:31:24.888 [2024-11-29T12:15:24.708Z] =================================================================================================================== 00:31:24.888 [2024-11-29T12:15:24.708Z] Total : 22970.00 89.73 0.00 0.00 5569.30 3333.79 14816.83 00:31:24.888 { 00:31:24.888 "results": [ 00:31:24.888 { 00:31:24.888 "job": "Nvme0n1", 00:31:24.888 "core_mask": "0x2", 00:31:24.888 "workload": "randwrite", 00:31:24.888 "status": "finished", 00:31:24.888 "queue_depth": 128, 00:31:24.888 "io_size": 4096, 00:31:24.888 "runtime": 10.00479, 00:31:24.888 "iops": 22969.997371259167, 00:31:24.888 "mibps": 89.72655223148112, 00:31:24.888 "io_failed": 0, 00:31:24.888 "io_timeout": 0, 00:31:24.888 "avg_latency_us": 5569.301635596892, 00:31:24.888 "min_latency_us": 3333.7878260869566, 00:31:24.888 "max_latency_us": 14816.834782608696 00:31:24.888 } 00:31:24.888 ], 00:31:24.888 "core_count": 1 00:31:24.889 } 00:31:24.889 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2187350 00:31:24.889 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2187350 ']' 00:31:24.889 13:15:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2187350 00:31:24.889 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:24.889 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:24.889 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2187350 00:31:25.148 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:25.148 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:25.148 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2187350' 00:31:25.148 killing process with pid 2187350 00:31:25.148 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2187350 00:31:25.148 Received shutdown signal, test time was about 10.000000 seconds 00:31:25.148 00:31:25.148 Latency(us) 00:31:25.148 [2024-11-29T12:15:24.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.148 [2024-11-29T12:15:24.968Z] =================================================================================================================== 00:31:25.148 [2024-11-29T12:15:24.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:25.148 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2187350 00:31:25.148 13:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:25.407 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:25.664 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:25.664 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:25.664 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:25.664 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:25.664 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:25.920 [2024-11-29 13:15:25.619370] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:25.920 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:26.177 request: 00:31:26.177 { 00:31:26.177 "uuid": "cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1", 00:31:26.177 "method": 
"bdev_lvol_get_lvstores", 00:31:26.177 "req_id": 1 00:31:26.177 } 00:31:26.177 Got JSON-RPC error response 00:31:26.177 response: 00:31:26.177 { 00:31:26.177 "code": -19, 00:31:26.177 "message": "No such device" 00:31:26.177 } 00:31:26.177 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:26.177 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:26.177 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:26.177 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:26.177 13:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:26.436 aio_bdev 00:31:26.436 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 84db96e9-3640-4b3b-962e-eee6184a88da 00:31:26.436 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=84db96e9-3640-4b3b-962e-eee6184a88da 00:31:26.436 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:26.436 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:26.436 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:26.436 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:26.436 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:26.436 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 84db96e9-3640-4b3b-962e-eee6184a88da -t 2000 00:31:26.694 [ 00:31:26.694 { 00:31:26.694 "name": "84db96e9-3640-4b3b-962e-eee6184a88da", 00:31:26.694 "aliases": [ 00:31:26.694 "lvs/lvol" 00:31:26.694 ], 00:31:26.694 "product_name": "Logical Volume", 00:31:26.694 "block_size": 4096, 00:31:26.694 "num_blocks": 38912, 00:31:26.694 "uuid": "84db96e9-3640-4b3b-962e-eee6184a88da", 00:31:26.694 "assigned_rate_limits": { 00:31:26.694 "rw_ios_per_sec": 0, 00:31:26.694 "rw_mbytes_per_sec": 0, 00:31:26.694 "r_mbytes_per_sec": 0, 00:31:26.694 "w_mbytes_per_sec": 0 00:31:26.694 }, 00:31:26.694 "claimed": false, 00:31:26.694 "zoned": false, 00:31:26.694 "supported_io_types": { 00:31:26.694 "read": true, 00:31:26.694 "write": true, 00:31:26.694 "unmap": true, 00:31:26.694 "flush": false, 00:31:26.694 "reset": true, 00:31:26.694 "nvme_admin": false, 00:31:26.694 "nvme_io": false, 00:31:26.694 "nvme_io_md": false, 00:31:26.694 "write_zeroes": true, 00:31:26.694 "zcopy": false, 00:31:26.694 "get_zone_info": false, 00:31:26.694 "zone_management": false, 00:31:26.694 "zone_append": false, 00:31:26.694 "compare": false, 00:31:26.694 "compare_and_write": false, 00:31:26.694 "abort": false, 00:31:26.694 "seek_hole": true, 00:31:26.694 "seek_data": true, 00:31:26.694 "copy": false, 00:31:26.694 "nvme_iov_md": false 00:31:26.694 }, 00:31:26.694 "driver_specific": { 00:31:26.694 "lvol": { 00:31:26.694 "lvol_store_uuid": "cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1", 00:31:26.694 "base_bdev": "aio_bdev", 00:31:26.694 
"thin_provision": false, 00:31:26.694 "num_allocated_clusters": 38, 00:31:26.694 "snapshot": false, 00:31:26.694 "clone": false, 00:31:26.694 "esnap_clone": false 00:31:26.694 } 00:31:26.694 } 00:31:26.694 } 00:31:26.694 ] 00:31:26.694 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:26.694 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:26.694 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:26.954 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:26.954 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 00:31:26.954 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:27.212 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:27.212 13:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 84db96e9-3640-4b3b-962e-eee6184a88da 00:31:27.212 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf6d4a4b-2d3a-413c-981c-5c792dd7f5b1 
00:31:27.471 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:27.729 00:31:27.729 real 0m15.686s 00:31:27.729 user 0m15.363s 00:31:27.729 sys 0m1.389s 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:27.729 ************************************ 00:31:27.729 END TEST lvs_grow_clean 00:31:27.729 ************************************ 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:27.729 ************************************ 00:31:27.729 START TEST lvs_grow_dirty 00:31:27.729 ************************************ 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:27.729 13:15:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:27.729 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:27.988 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:27.988 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:28.246 13:15:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:28.246 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:28.246 13:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:28.504 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:28.504 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:28.504 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b07d078c-a8b0-4439-ba80-8c34566ea337 lvol 150 00:31:28.761 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=75eda24a-f7fe-445e-8197-2d9317ae8999 00:31:28.761 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:28.761 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:28.761 [2024-11-29 13:15:28.547242] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:28.761 [2024-11-29 
13:15:28.547324] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:28.761 true 00:31:28.761 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:28.761 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:29.019 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:29.019 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:29.277 13:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 75eda24a-f7fe-445e-8197-2d9317ae8999 00:31:29.534 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:29.534 [2024-11-29 13:15:29.299676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.534 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2189929 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2189929 /var/tmp/bdevperf.sock 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2189929 ']' 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:29.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:29.793 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:29.793 [2024-11-29 13:15:29.571754] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:31:29.793 [2024-11-29 13:15:29.571803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189929 ] 00:31:30.052 [2024-11-29 13:15:29.634419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.052 [2024-11-29 13:15:29.677676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.052 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:30.052 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:30.052 13:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:30.617 Nvme0n1 00:31:30.617 13:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:30.617 [ 00:31:30.617 { 00:31:30.617 "name": "Nvme0n1", 00:31:30.617 "aliases": [ 00:31:30.617 "75eda24a-f7fe-445e-8197-2d9317ae8999" 00:31:30.617 ], 00:31:30.617 "product_name": "NVMe disk", 00:31:30.617 "block_size": 4096, 00:31:30.617 "num_blocks": 38912, 00:31:30.617 "uuid": "75eda24a-f7fe-445e-8197-2d9317ae8999", 00:31:30.617 "numa_id": 1, 00:31:30.617 "assigned_rate_limits": { 00:31:30.617 "rw_ios_per_sec": 0, 00:31:30.617 "rw_mbytes_per_sec": 0, 00:31:30.617 "r_mbytes_per_sec": 0, 00:31:30.617 "w_mbytes_per_sec": 0 00:31:30.617 }, 00:31:30.617 "claimed": false, 00:31:30.617 "zoned": false, 
00:31:30.617 "supported_io_types": { 00:31:30.617 "read": true, 00:31:30.617 "write": true, 00:31:30.617 "unmap": true, 00:31:30.617 "flush": true, 00:31:30.617 "reset": true, 00:31:30.617 "nvme_admin": true, 00:31:30.617 "nvme_io": true, 00:31:30.617 "nvme_io_md": false, 00:31:30.617 "write_zeroes": true, 00:31:30.617 "zcopy": false, 00:31:30.617 "get_zone_info": false, 00:31:30.617 "zone_management": false, 00:31:30.617 "zone_append": false, 00:31:30.617 "compare": true, 00:31:30.617 "compare_and_write": true, 00:31:30.617 "abort": true, 00:31:30.617 "seek_hole": false, 00:31:30.617 "seek_data": false, 00:31:30.617 "copy": true, 00:31:30.617 "nvme_iov_md": false 00:31:30.617 }, 00:31:30.617 "memory_domains": [ 00:31:30.617 { 00:31:30.617 "dma_device_id": "system", 00:31:30.617 "dma_device_type": 1 00:31:30.617 } 00:31:30.617 ], 00:31:30.617 "driver_specific": { 00:31:30.617 "nvme": [ 00:31:30.617 { 00:31:30.617 "trid": { 00:31:30.617 "trtype": "TCP", 00:31:30.617 "adrfam": "IPv4", 00:31:30.617 "traddr": "10.0.0.2", 00:31:30.617 "trsvcid": "4420", 00:31:30.617 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:30.617 }, 00:31:30.617 "ctrlr_data": { 00:31:30.617 "cntlid": 1, 00:31:30.617 "vendor_id": "0x8086", 00:31:30.617 "model_number": "SPDK bdev Controller", 00:31:30.617 "serial_number": "SPDK0", 00:31:30.617 "firmware_revision": "25.01", 00:31:30.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.617 "oacs": { 00:31:30.617 "security": 0, 00:31:30.617 "format": 0, 00:31:30.617 "firmware": 0, 00:31:30.617 "ns_manage": 0 00:31:30.617 }, 00:31:30.617 "multi_ctrlr": true, 00:31:30.617 "ana_reporting": false 00:31:30.617 }, 00:31:30.617 "vs": { 00:31:30.617 "nvme_version": "1.3" 00:31:30.617 }, 00:31:30.617 "ns_data": { 00:31:30.617 "id": 1, 00:31:30.617 "can_share": true 00:31:30.617 } 00:31:30.617 } 00:31:30.617 ], 00:31:30.617 "mp_policy": "active_passive" 00:31:30.617 } 00:31:30.617 } 00:31:30.617 ] 00:31:30.617 13:15:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2190157 00:31:30.617 13:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:30.617 13:15:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:30.875 Running I/O for 10 seconds... 00:31:31.809 Latency(us) 00:31:31.809 [2024-11-29T12:15:31.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:31.809 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:31:31.809 [2024-11-29T12:15:31.629Z] =================================================================================================================== 00:31:31.809 [2024-11-29T12:15:31.629Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:31:31.809 00:31:32.743 13:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:32.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.743 Nvme0n1 : 2.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:31:32.743 [2024-11-29T12:15:32.563Z] =================================================================================================================== 00:31:32.743 [2024-11-29T12:15:32.563Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:31:32.743 00:31:32.743 true 00:31:32.743 13:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:32.743 13:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:33.001 13:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:33.001 13:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:33.001 13:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2190157 00:31:33.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:33.936 Nvme0n1 : 3.00 22690.67 88.64 0.00 0.00 0.00 0.00 0.00 00:31:33.936 [2024-11-29T12:15:33.756Z] =================================================================================================================== 00:31:33.936 [2024-11-29T12:15:33.756Z] Total : 22690.67 88.64 0.00 0.00 0.00 0.00 0.00 00:31:33.936 00:31:34.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:34.870 Nvme0n1 : 4.00 22796.50 89.05 0.00 0.00 0.00 0.00 0.00 00:31:34.870 [2024-11-29T12:15:34.690Z] =================================================================================================================== 00:31:34.870 [2024-11-29T12:15:34.690Z] Total : 22796.50 89.05 0.00 0.00 0.00 0.00 0.00 00:31:34.870 00:31:35.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:35.806 Nvme0n1 : 5.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:31:35.806 [2024-11-29T12:15:35.626Z] =================================================================================================================== 00:31:35.806 [2024-11-29T12:15:35.626Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:31:35.806 00:31:36.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:36.740 Nvme0n1 : 6.00 22902.33 89.46 0.00 0.00 0.00 0.00 0.00 00:31:36.740 [2024-11-29T12:15:36.560Z] =================================================================================================================== 00:31:36.740 [2024-11-29T12:15:36.560Z] Total : 22902.33 89.46 0.00 0.00 0.00 0.00 0.00 00:31:36.740 00:31:37.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:37.673 Nvme0n1 : 7.00 22932.57 89.58 0.00 0.00 0.00 0.00 0.00 00:31:37.673 [2024-11-29T12:15:37.493Z] =================================================================================================================== 00:31:37.673 [2024-11-29T12:15:37.493Z] Total : 22932.57 89.58 0.00 0.00 0.00 0.00 0.00 00:31:37.673 00:31:39.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:39.048 Nvme0n1 : 8.00 22971.12 89.73 0.00 0.00 0.00 0.00 0.00 00:31:39.048 [2024-11-29T12:15:38.868Z] =================================================================================================================== 00:31:39.048 [2024-11-29T12:15:38.868Z] Total : 22971.12 89.73 0.00 0.00 0.00 0.00 0.00 00:31:39.048 00:31:39.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:39.982 Nvme0n1 : 9.00 22994.11 89.82 0.00 0.00 0.00 0.00 0.00 00:31:39.982 [2024-11-29T12:15:39.802Z] =================================================================================================================== 00:31:39.982 [2024-11-29T12:15:39.802Z] Total : 22994.11 89.82 0.00 0.00 0.00 0.00 0.00 00:31:39.982 00:31:40.917 00:31:40.917 Latency(us) 00:31:40.917 [2024-11-29T12:15:40.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:40.917 Nvme0n1 : 10.00 22987.42 89.79 0.00 0.00 5565.20 3205.57 14930.81 00:31:40.917 [2024-11-29T12:15:40.737Z] 
=================================================================================================================== 00:31:40.917 [2024-11-29T12:15:40.737Z] Total : 22987.42 89.79 0.00 0.00 5565.20 3205.57 14930.81 00:31:40.917 { 00:31:40.917 "results": [ 00:31:40.917 { 00:31:40.917 "job": "Nvme0n1", 00:31:40.917 "core_mask": "0x2", 00:31:40.917 "workload": "randwrite", 00:31:40.917 "status": "finished", 00:31:40.917 "queue_depth": 128, 00:31:40.917 "io_size": 4096, 00:31:40.917 "runtime": 10.001994, 00:31:40.917 "iops": 22987.41630918795, 00:31:40.917 "mibps": 89.79459495776543, 00:31:40.917 "io_failed": 0, 00:31:40.917 "io_timeout": 0, 00:31:40.917 "avg_latency_us": 5565.199607394632, 00:31:40.917 "min_latency_us": 3205.5652173913045, 00:31:40.917 "max_latency_us": 14930.810434782608 00:31:40.917 } 00:31:40.917 ], 00:31:40.917 "core_count": 1 00:31:40.917 } 00:31:40.917 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2189929 00:31:40.917 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2189929 ']' 00:31:40.917 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2189929 00:31:40.917 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:40.917 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:40.917 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2189929 00:31:40.917 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:40.917 13:15:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:40.917 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2189929' 00:31:40.917 killing process with pid 2189929 00:31:40.917 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2189929 00:31:40.917 Received shutdown signal, test time was about 10.000000 seconds 00:31:40.917 00:31:40.917 Latency(us) 00:31:40.917 [2024-11-29T12:15:40.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.918 [2024-11-29T12:15:40.738Z] =================================================================================================================== 00:31:40.918 [2024-11-29T12:15:40.738Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:40.918 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2189929 00:31:40.918 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:41.177 13:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:41.435 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:41.435 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r 
'.[0].free_clusters' 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2187015 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2187015 00:31:41.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2187015 Killed "${NVMF_APP[@]}" "$@" 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2191905 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2191905 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2191905 ']' 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:41.694 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:41.694 [2024-11-29 13:15:41.395293] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:41.694 [2024-11-29 13:15:41.396176] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:31:41.694 [2024-11-29 13:15:41.396211] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.694 [2024-11-29 13:15:41.463342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.694 [2024-11-29 13:15:41.504709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:41.694 [2024-11-29 13:15:41.504743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:41.694 [2024-11-29 13:15:41.504750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:41.694 [2024-11-29 13:15:41.504756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:41.694 [2024-11-29 13:15:41.504761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:41.694 [2024-11-29 13:15:41.505310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.953 [2024-11-29 13:15:41.574161] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:41.953 [2024-11-29 13:15:41.574375] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:41.953 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:41.953 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:41.953 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:41.953 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:41.953 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:41.953 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.953 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:42.212 [2024-11-29 13:15:41.812370] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:42.212 [2024-11-29 13:15:41.812476] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:42.212 [2024-11-29 13:15:41.812513] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:42.212 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:42.212 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 75eda24a-f7fe-445e-8197-2d9317ae8999 00:31:42.213 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=75eda24a-f7fe-445e-8197-2d9317ae8999 00:31:42.213 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:42.213 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:42.213 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:42.213 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:42.213 13:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:42.471 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 75eda24a-f7fe-445e-8197-2d9317ae8999 -t 2000 00:31:42.471 [ 
00:31:42.471 { 00:31:42.471 "name": "75eda24a-f7fe-445e-8197-2d9317ae8999", 00:31:42.471 "aliases": [ 00:31:42.471 "lvs/lvol" 00:31:42.471 ], 00:31:42.471 "product_name": "Logical Volume", 00:31:42.471 "block_size": 4096, 00:31:42.471 "num_blocks": 38912, 00:31:42.471 "uuid": "75eda24a-f7fe-445e-8197-2d9317ae8999", 00:31:42.471 "assigned_rate_limits": { 00:31:42.471 "rw_ios_per_sec": 0, 00:31:42.471 "rw_mbytes_per_sec": 0, 00:31:42.471 "r_mbytes_per_sec": 0, 00:31:42.471 "w_mbytes_per_sec": 0 00:31:42.471 }, 00:31:42.471 "claimed": false, 00:31:42.471 "zoned": false, 00:31:42.471 "supported_io_types": { 00:31:42.472 "read": true, 00:31:42.472 "write": true, 00:31:42.472 "unmap": true, 00:31:42.472 "flush": false, 00:31:42.472 "reset": true, 00:31:42.472 "nvme_admin": false, 00:31:42.472 "nvme_io": false, 00:31:42.472 "nvme_io_md": false, 00:31:42.472 "write_zeroes": true, 00:31:42.472 "zcopy": false, 00:31:42.472 "get_zone_info": false, 00:31:42.472 "zone_management": false, 00:31:42.472 "zone_append": false, 00:31:42.472 "compare": false, 00:31:42.472 "compare_and_write": false, 00:31:42.472 "abort": false, 00:31:42.472 "seek_hole": true, 00:31:42.472 "seek_data": true, 00:31:42.472 "copy": false, 00:31:42.472 "nvme_iov_md": false 00:31:42.472 }, 00:31:42.472 "driver_specific": { 00:31:42.472 "lvol": { 00:31:42.472 "lvol_store_uuid": "b07d078c-a8b0-4439-ba80-8c34566ea337", 00:31:42.472 "base_bdev": "aio_bdev", 00:31:42.472 "thin_provision": false, 00:31:42.472 "num_allocated_clusters": 38, 00:31:42.472 "snapshot": false, 00:31:42.472 "clone": false, 00:31:42.472 "esnap_clone": false 00:31:42.472 } 00:31:42.472 } 00:31:42.472 } 00:31:42.472 ] 00:31:42.472 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:42.472 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:42.472 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:42.731 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:42.731 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:42.731 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:42.990 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:42.990 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:42.990 [2024-11-29 13:15:42.797774] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:43.250 13:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:43.250 request: 00:31:43.250 { 00:31:43.250 "uuid": "b07d078c-a8b0-4439-ba80-8c34566ea337", 00:31:43.250 "method": "bdev_lvol_get_lvstores", 00:31:43.250 "req_id": 1 00:31:43.250 } 00:31:43.250 Got JSON-RPC 
error response 00:31:43.250 response: 00:31:43.250 { 00:31:43.250 "code": -19, 00:31:43.250 "message": "No such device" 00:31:43.250 } 00:31:43.250 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:43.250 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:43.250 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:43.250 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:43.250 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:43.509 aio_bdev 00:31:43.509 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 75eda24a-f7fe-445e-8197-2d9317ae8999 00:31:43.509 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=75eda24a-f7fe-445e-8197-2d9317ae8999 00:31:43.509 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:43.509 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:43.509 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:43.509 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:43.509 13:15:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:43.768 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 75eda24a-f7fe-445e-8197-2d9317ae8999 -t 2000 00:31:44.027 [ 00:31:44.027 { 00:31:44.027 "name": "75eda24a-f7fe-445e-8197-2d9317ae8999", 00:31:44.027 "aliases": [ 00:31:44.027 "lvs/lvol" 00:31:44.027 ], 00:31:44.027 "product_name": "Logical Volume", 00:31:44.027 "block_size": 4096, 00:31:44.027 "num_blocks": 38912, 00:31:44.027 "uuid": "75eda24a-f7fe-445e-8197-2d9317ae8999", 00:31:44.027 "assigned_rate_limits": { 00:31:44.027 "rw_ios_per_sec": 0, 00:31:44.027 "rw_mbytes_per_sec": 0, 00:31:44.027 "r_mbytes_per_sec": 0, 00:31:44.027 "w_mbytes_per_sec": 0 00:31:44.028 }, 00:31:44.028 "claimed": false, 00:31:44.028 "zoned": false, 00:31:44.028 "supported_io_types": { 00:31:44.028 "read": true, 00:31:44.028 "write": true, 00:31:44.028 "unmap": true, 00:31:44.028 "flush": false, 00:31:44.028 "reset": true, 00:31:44.028 "nvme_admin": false, 00:31:44.028 "nvme_io": false, 00:31:44.028 "nvme_io_md": false, 00:31:44.028 "write_zeroes": true, 00:31:44.028 "zcopy": false, 00:31:44.028 "get_zone_info": false, 00:31:44.028 "zone_management": false, 00:31:44.028 "zone_append": false, 00:31:44.028 "compare": false, 00:31:44.028 "compare_and_write": false, 00:31:44.028 "abort": false, 00:31:44.028 "seek_hole": true, 00:31:44.028 "seek_data": true, 00:31:44.028 "copy": false, 00:31:44.028 "nvme_iov_md": false 00:31:44.028 }, 00:31:44.028 "driver_specific": { 00:31:44.028 "lvol": { 00:31:44.028 "lvol_store_uuid": "b07d078c-a8b0-4439-ba80-8c34566ea337", 00:31:44.028 "base_bdev": "aio_bdev", 00:31:44.028 "thin_provision": false, 00:31:44.028 "num_allocated_clusters": 38, 00:31:44.028 
"snapshot": false, 00:31:44.028 "clone": false, 00:31:44.028 "esnap_clone": false 00:31:44.028 } 00:31:44.028 } 00:31:44.028 } 00:31:44.028 ] 00:31:44.028 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:44.028 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:44.028 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:44.028 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:44.028 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:44.028 13:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:44.287 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:44.287 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 75eda24a-f7fe-445e-8197-2d9317ae8999 00:31:44.546 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b07d078c-a8b0-4439-ba80-8c34566ea337 00:31:44.804 13:15:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:45.064 00:31:45.064 real 0m17.146s 00:31:45.064 user 0m34.701s 00:31:45.064 sys 0m3.642s 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:45.064 ************************************ 00:31:45.064 END TEST lvs_grow_dirty 00:31:45.064 ************************************ 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 
00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:45.064 nvmf_trace.0 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.064 rmmod nvme_tcp 00:31:45.064 rmmod nvme_fabrics 00:31:45.064 rmmod nvme_keyring 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2191905 ']' 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2191905 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@954 -- # '[' -z 2191905 ']' 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2191905 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2191905 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2191905' 00:31:45.064 killing process with pid 2191905 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2191905 00:31:45.064 13:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2191905 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.323 13:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.854 00:31:47.854 real 0m41.553s 00:31:47.854 user 0m52.411s 00:31:47.854 sys 0m9.589s 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:47.854 ************************************ 00:31:47.854 END TEST nvmf_lvs_grow 00:31:47.854 ************************************ 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.854 13:15:47 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:47.854 ************************************ 00:31:47.854 START TEST nvmf_bdev_io_wait 00:31:47.854 ************************************ 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:47.854 * Looking for test storage... 00:31:47.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@337 -- # read -ra ver2 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.854 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:47.855 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.855 --rc genhtml_branch_coverage=1 00:31:47.855 --rc genhtml_function_coverage=1 00:31:47.855 --rc genhtml_legend=1 00:31:47.855 --rc geninfo_all_blocks=1 00:31:47.855 --rc geninfo_unexecuted_blocks=1 00:31:47.855 00:31:47.855 ' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:47.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.855 --rc genhtml_branch_coverage=1 00:31:47.855 --rc genhtml_function_coverage=1 00:31:47.855 --rc genhtml_legend=1 00:31:47.855 --rc geninfo_all_blocks=1 00:31:47.855 --rc geninfo_unexecuted_blocks=1 00:31:47.855 00:31:47.855 ' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:47.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.855 --rc genhtml_branch_coverage=1 00:31:47.855 --rc genhtml_function_coverage=1 00:31:47.855 --rc genhtml_legend=1 00:31:47.855 --rc geninfo_all_blocks=1 00:31:47.855 --rc geninfo_unexecuted_blocks=1 00:31:47.855 00:31:47.855 ' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:47.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.855 --rc genhtml_branch_coverage=1 00:31:47.855 --rc genhtml_function_coverage=1 00:31:47.855 --rc genhtml_legend=1 00:31:47.855 --rc geninfo_all_blocks=1 00:31:47.855 --rc geninfo_unexecuted_blocks=1 00:31:47.855 00:31:47.855 ' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:47.855 13:15:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.855 13:15:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.855 13:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:53.120 13:15:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:53.120 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:53.120 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.120 13:15:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:31:53.120 Found net devices under 0000:86:00.0: cvl_0_0 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:53.120 Found net devices under 0000:86:00.1: cvl_0_1 00:31:53.120 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:53.121 13:15:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.121 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:53.379 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:53.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:53.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:31:53.379 00:31:53.379 --- 10.0.0.2 ping statistics --- 00:31:53.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.379 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:31:53.379 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:53.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:31:53.379 00:31:53.379 --- 10.0.0.1 ping statistics --- 00:31:53.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.379 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:53.379 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.379 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:53.379 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:53.379 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.379 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:53.379 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:53.379 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:53.380 13:15:52 
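The `ip netns` / `ip addr` / `ipts` trace above sets up the test topology: the target NIC (`cvl_0_0`) is moved into a private namespace at 10.0.0.2, the initiator NIC (`cvl_0_1`) stays in the root namespace at 10.0.0.1, and port 4420 is opened for NVMe/TCP. A condensed sketch of that sequence is below; the interface names and addresses come from this log, but the `ip`/`iptables` shims are hypothetical dry-run stand-ins so the sketch can run without root or real NICs (drop them to apply the commands for real).

```shell
setup_net() {
    # dry-run shims: print each command instead of executing it
    ip() { echo "ip $*"; }
    iptables() { echo "iptables $*"; }

    local tgt=cvl_0_0 ini=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$tgt"                  # start from clean interfaces
    ip -4 addr flush "$ini"
    ip netns add "$ns"                       # isolate the target side
    ip link set "$tgt" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini"       # initiator in root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    ip link set "$ini" up
    ip netns exec "$ns" ip link set "$tgt" up
    ip netns exec "$ns" ip link set lo up
    # accept NVMe/TCP traffic to the default port on the initiator NIC
    iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
}
setup_net
```

In the real run the setup is then verified with the two `ping -c 1` probes shown above, one in each direction across the namespace boundary.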
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2196030 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2196030 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2196030 ']' 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:53.380 13:15:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.380 [2024-11-29 13:15:53.040773] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:53.380 [2024-11-29 13:15:53.041705] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:31:53.380 [2024-11-29 13:15:53.041738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.380 [2024-11-29 13:15:53.108019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.380 [2024-11-29 13:15:53.152002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.380 [2024-11-29 13:15:53.152040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.380 [2024-11-29 13:15:53.152048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.380 [2024-11-29 13:15:53.152053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:53.380 [2024-11-29 13:15:53.152058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:53.380 [2024-11-29 13:15:53.153659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.380 [2024-11-29 13:15:53.153754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.380 [2024-11-29 13:15:53.153934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:53.380 [2024-11-29 13:15:53.153936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.380 [2024-11-29 13:15:53.154234] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:53.380 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:53.380 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:53.380 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:53.380 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:53.380 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.638 13:15:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.638 [2024-11-29 13:15:53.281014] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:53.638 [2024-11-29 13:15:53.281095] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:53.638 [2024-11-29 13:15:53.281640] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:53.638 [2024-11-29 13:15:53.282107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.638 [2024-11-29 13:15:53.290610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.638 Malloc0 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.638 13:15:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:53.638 [2024-11-29 13:15:53.346573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2196052 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:53.638 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2196054 00:31:53.639 13:15:53 
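The `rpc_cmd` calls traced above (bdev_io_wait.sh lines @18-@25) configure the interrupt-mode target end to end: bdev options, framework init, TCP transport, a malloc backing bdev, and a subsystem with a namespace and a listener. A condensed replay is sketched below; the `rpc_cmd` shim is hypothetical and just prints, whereas in the real run it is SPDK's RPC client talking to the `nvmf_tgt` process inside the namespace.

```shell
# Replay of the RPC sequence from this log, with a print-only shim.
rpc_cmd() { echo "rpc.py $*"; }

configure_target() {
    rpc_cmd bdev_set_options -p 5 -c 1       # must precede framework_start_init
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
configure_target
```

The ordering matters: `--wait-for-rpc` holds the target at startup so `bdev_set_options` can be applied before `framework_start_init`, which is why those two calls come first in the trace.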
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:53.639 { 00:31:53.639 "params": { 00:31:53.639 "name": "Nvme$subsystem", 00:31:53.639 "trtype": "$TEST_TRANSPORT", 00:31:53.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.639 "adrfam": "ipv4", 00:31:53.639 "trsvcid": "$NVMF_PORT", 00:31:53.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.639 "hdgst": ${hdgst:-false}, 00:31:53.639 "ddgst": ${ddgst:-false} 00:31:53.639 }, 00:31:53.639 "method": "bdev_nvme_attach_controller" 00:31:53.639 } 00:31:53.639 EOF 00:31:53.639 )") 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2196056 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:53.639 13:15:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:53.639 { 00:31:53.639 "params": { 00:31:53.639 "name": "Nvme$subsystem", 00:31:53.639 "trtype": "$TEST_TRANSPORT", 00:31:53.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.639 "adrfam": "ipv4", 00:31:53.639 "trsvcid": "$NVMF_PORT", 00:31:53.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.639 "hdgst": ${hdgst:-false}, 00:31:53.639 "ddgst": ${ddgst:-false} 00:31:53.639 }, 00:31:53.639 "method": "bdev_nvme_attach_controller" 00:31:53.639 } 00:31:53.639 EOF 00:31:53.639 )") 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2196059 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:53.639 { 00:31:53.639 "params": { 00:31:53.639 "name": 
"Nvme$subsystem", 00:31:53.639 "trtype": "$TEST_TRANSPORT", 00:31:53.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.639 "adrfam": "ipv4", 00:31:53.639 "trsvcid": "$NVMF_PORT", 00:31:53.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.639 "hdgst": ${hdgst:-false}, 00:31:53.639 "ddgst": ${ddgst:-false} 00:31:53.639 }, 00:31:53.639 "method": "bdev_nvme_attach_controller" 00:31:53.639 } 00:31:53.639 EOF 00:31:53.639 )") 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:53.639 { 00:31:53.639 "params": { 00:31:53.639 "name": "Nvme$subsystem", 00:31:53.639 "trtype": "$TEST_TRANSPORT", 00:31:53.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.639 "adrfam": "ipv4", 00:31:53.639 "trsvcid": "$NVMF_PORT", 00:31:53.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.639 "hdgst": ${hdgst:-false}, 00:31:53.639 "ddgst": ${ddgst:-false} 00:31:53.639 }, 00:31:53.639 "method": 
"bdev_nvme_attach_controller" 00:31:53.639 } 00:31:53.639 EOF 00:31:53.639 )") 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2196052 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:53.639 "params": { 00:31:53.639 "name": "Nvme1", 00:31:53.639 "trtype": "tcp", 00:31:53.639 "traddr": "10.0.0.2", 00:31:53.639 "adrfam": "ipv4", 00:31:53.639 "trsvcid": "4420", 00:31:53.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:53.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:53.639 "hdgst": false, 00:31:53.639 "ddgst": false 00:31:53.639 }, 00:31:53.639 "method": "bdev_nvme_attach_controller" 00:31:53.639 }' 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
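Each bdevperf instance above receives its configuration as JSON on `/dev/fd/63`, built by `gen_nvmf_target_json` from the heredoc template and resolved to the `printf`'d object shown in the trace. The sketch below reproduces that resolved configuration; the field values are taken from this log, while the surrounding `{"subsystems": [...]}` wrapper is an assumption about the full file shape, since the trace only shows the per-controller `params` object before `jq` assembles it.

```shell
# Emit a bdevperf --json config attaching one NVMe-oF controller over TCP.
# Values mirror this log; the "subsystems" wrapper is an assumed outer shape.
gen_config() {
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}
gen_config | python3 -m json.tool > /dev/null   # sanity-check it parses
```

The `${hdgst:-false}`/`${ddgst:-false}` defaults in the template are why all four instances end up with header and data digests disabled here.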
00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:53.639 "params": { 00:31:53.639 "name": "Nvme1", 00:31:53.639 "trtype": "tcp", 00:31:53.639 "traddr": "10.0.0.2", 00:31:53.639 "adrfam": "ipv4", 00:31:53.639 "trsvcid": "4420", 00:31:53.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:53.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:53.639 "hdgst": false, 00:31:53.639 "ddgst": false 00:31:53.639 }, 00:31:53.639 "method": "bdev_nvme_attach_controller" 00:31:53.639 }' 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:53.639 "params": { 00:31:53.639 "name": "Nvme1", 00:31:53.639 "trtype": "tcp", 00:31:53.639 "traddr": "10.0.0.2", 00:31:53.639 "adrfam": "ipv4", 00:31:53.639 "trsvcid": "4420", 00:31:53.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:53.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:53.639 "hdgst": false, 00:31:53.639 "ddgst": false 00:31:53.639 }, 00:31:53.639 "method": "bdev_nvme_attach_controller" 00:31:53.639 }' 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:53.639 13:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:53.640 "params": { 00:31:53.640 "name": "Nvme1", 00:31:53.640 "trtype": "tcp", 00:31:53.640 "traddr": "10.0.0.2", 00:31:53.640 "adrfam": "ipv4", 00:31:53.640 "trsvcid": "4420", 00:31:53.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:53.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:53.640 "hdgst": false, 00:31:53.640 "ddgst": false 00:31:53.640 }, 00:31:53.640 "method": "bdev_nvme_attach_controller" 
00:31:53.640 }' 00:31:53.640 [2024-11-29 13:15:53.399338] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:31:53.640 [2024-11-29 13:15:53.399337] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:31:53.640 [2024-11-29 13:15:53.399387] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:53.640 [2024-11-29 13:15:53.399388] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:53.640 [2024-11-29 13:15:53.401637] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:31:53.640 [2024-11-29 13:15:53.401681] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:53.640 [2024-11-29 13:15:53.402407] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:31:53.640 [2024-11-29 13:15:53.402447] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:53.898 [2024-11-29 13:15:53.598612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.898 [2024-11-29 13:15:53.641671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:53.898 [2024-11-29 13:15:53.691170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.155 [2024-11-29 13:15:53.743840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.155 [2024-11-29 13:15:53.746295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:54.155 [2024-11-29 13:15:53.786852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:54.155 [2024-11-29 13:15:53.799268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.155 [2024-11-29 13:15:53.841982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:54.155 Running I/O for 1 seconds... 00:31:54.155 Running I/O for 1 seconds... 00:31:54.412 Running I/O for 1 seconds... 00:31:54.412 Running I/O for 1 seconds... 
00:31:55.347 11942.00 IOPS, 46.65 MiB/s 00:31:55.347 Latency(us) 00:31:55.347 [2024-11-29T12:15:55.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.347 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:55.348 Nvme1n1 : 1.01 12000.82 46.88 0.00 0.00 10630.76 1538.67 12765.27 00:31:55.348 [2024-11-29T12:15:55.168Z] =================================================================================================================== 00:31:55.348 [2024-11-29T12:15:55.168Z] Total : 12000.82 46.88 0.00 0.00 10630.76 1538.67 12765.27 00:31:55.348 237472.00 IOPS, 927.62 MiB/s 00:31:55.348 Latency(us) 00:31:55.348 [2024-11-29T12:15:55.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.348 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:55.348 Nvme1n1 : 1.00 237099.46 926.17 0.00 0.00 537.26 227.06 1545.79 00:31:55.348 [2024-11-29T12:15:55.168Z] =================================================================================================================== 00:31:55.348 [2024-11-29T12:15:55.168Z] Total : 237099.46 926.17 0.00 0.00 537.26 227.06 1545.79 00:31:55.348 11191.00 IOPS, 43.71 MiB/s 00:31:55.348 Latency(us) 00:31:55.348 [2024-11-29T12:15:55.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.348 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:55.348 Nvme1n1 : 1.01 11269.12 44.02 0.00 0.00 11326.80 1866.35 14019.01 00:31:55.348 [2024-11-29T12:15:55.168Z] =================================================================================================================== 00:31:55.348 [2024-11-29T12:15:55.168Z] Total : 11269.12 44.02 0.00 0.00 11326.80 1866.35 14019.01 00:31:55.348 10147.00 IOPS, 39.64 MiB/s [2024-11-29T12:15:55.168Z] 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2196054 00:31:55.348 00:31:55.348 
Latency(us) 00:31:55.348 [2024-11-29T12:15:55.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.348 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:55.348 Nvme1n1 : 1.05 9820.57 38.36 0.00 0.00 12494.38 4074.63 45362.31 00:31:55.348 [2024-11-29T12:15:55.168Z] =================================================================================================================== 00:31:55.348 [2024-11-29T12:15:55.168Z] Total : 9820.57 38.36 0.00 0.00 12494.38 4074.63 45362.31 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2196056 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2196059 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.607 rmmod nvme_tcp 00:31:55.607 rmmod nvme_fabrics 00:31:55.607 rmmod nvme_keyring 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2196030 ']' 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2196030 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2196030 ']' 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2196030 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2196030 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:55.607 13:15:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2196030' 00:31:55.607 killing process with pid 2196030 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2196030 00:31:55.607 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2196030 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.866 13:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.770 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:57.770 00:31:57.770 real 0m10.348s 00:31:57.770 user 0m14.478s 00:31:57.770 sys 0m6.268s 00:31:57.770 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:57.770 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:57.770 ************************************ 00:31:57.770 END TEST nvmf_bdev_io_wait 00:31:57.770 ************************************ 00:31:57.770 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:57.770 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:57.770 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:57.770 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:57.770 ************************************ 00:31:57.770 START TEST nvmf_queue_depth 00:31:57.770 ************************************ 00:31:57.770 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:58.030 * Looking for test storage... 
00:31:58.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.030 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:58.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.030 --rc genhtml_branch_coverage=1 00:31:58.030 --rc genhtml_function_coverage=1 00:31:58.030 --rc genhtml_legend=1 00:31:58.030 --rc geninfo_all_blocks=1 00:31:58.030 --rc geninfo_unexecuted_blocks=1 00:31:58.030 00:31:58.031 ' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:58.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.031 --rc genhtml_branch_coverage=1 00:31:58.031 --rc genhtml_function_coverage=1 00:31:58.031 --rc genhtml_legend=1 00:31:58.031 --rc geninfo_all_blocks=1 00:31:58.031 --rc geninfo_unexecuted_blocks=1 00:31:58.031 00:31:58.031 ' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:58.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.031 --rc genhtml_branch_coverage=1 00:31:58.031 --rc genhtml_function_coverage=1 00:31:58.031 --rc genhtml_legend=1 00:31:58.031 --rc geninfo_all_blocks=1 00:31:58.031 --rc geninfo_unexecuted_blocks=1 00:31:58.031 00:31:58.031 ' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:58.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.031 --rc genhtml_branch_coverage=1 00:31:58.031 --rc genhtml_function_coverage=1 00:31:58.031 --rc genhtml_legend=1 00:31:58.031 --rc 
geninfo_all_blocks=1 00:31:58.031 --rc geninfo_unexecuted_blocks=1 00:31:58.031 00:31:58.031 ' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.031 13:15:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.031 13:15:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:58.031 13:15:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:58.031 13:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:03.308 
13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:03.308 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.308 13:16:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:03.308 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.308 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:03.309 Found net devices under 0000:86:00.0: cvl_0_0 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:03.309 Found net devices under 0000:86:00.1: cvl_0_1 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:03.309 13:16:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.309 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:03.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:03.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:32:03.625 00:32:03.625 --- 10.0.0.2 ping statistics --- 00:32:03.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.625 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:03.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:32:03.625 00:32:03.625 --- 10.0.0.1 ping statistics --- 00:32:03.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.625 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:03.625 13:16:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2199832 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2199832 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2199832 ']' 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:03.625 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.625 [2024-11-29 13:16:03.363221] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:03.625 [2024-11-29 13:16:03.364202] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:32:03.625 [2024-11-29 13:16:03.364241] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.933 [2024-11-29 13:16:03.436561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.933 [2024-11-29 13:16:03.478689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:03.933 [2024-11-29 13:16:03.478722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:03.933 [2024-11-29 13:16:03.478729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.933 [2024-11-29 13:16:03.478736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.933 [2024-11-29 13:16:03.478740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:03.933 [2024-11-29 13:16:03.479306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.933 [2024-11-29 13:16:03.547170] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:03.933 [2024-11-29 13:16:03.547384] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.934 [2024-11-29 13:16:03.612006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.934 Malloc0 00:32:03.934 13:16:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.934 [2024-11-29 13:16:03.683880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.934 
13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2199857 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2199857 /var/tmp/bdevperf.sock 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2199857 ']' 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:03.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:03.934 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:03.934 [2024-11-29 13:16:03.731665] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:32:03.934 [2024-11-29 13:16:03.731710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199857 ] 00:32:04.255 [2024-11-29 13:16:03.793238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.255 [2024-11-29 13:16:03.836879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.255 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:04.255 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:04.255 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:04.255 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.255 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:04.255 NVMe0n1 00:32:04.255 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.255 13:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:04.514 Running I/O for 10 seconds... 
00:32:06.412 11285.00 IOPS, 44.08 MiB/s [2024-11-29T12:16:07.169Z] 11774.00 IOPS, 45.99 MiB/s [2024-11-29T12:16:08.106Z] 11781.33 IOPS, 46.02 MiB/s [2024-11-29T12:16:09.484Z] 11828.75 IOPS, 46.21 MiB/s [2024-11-29T12:16:10.422Z] 11889.60 IOPS, 46.44 MiB/s [2024-11-29T12:16:11.361Z] 11947.00 IOPS, 46.67 MiB/s [2024-11-29T12:16:12.299Z] 11943.00 IOPS, 46.65 MiB/s [2024-11-29T12:16:13.237Z] 11980.25 IOPS, 46.80 MiB/s [2024-11-29T12:16:14.174Z] 12026.00 IOPS, 46.98 MiB/s [2024-11-29T12:16:14.433Z] 12060.00 IOPS, 47.11 MiB/s 00:32:14.613 Latency(us) 00:32:14.613 [2024-11-29T12:16:14.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.614 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:14.614 Verification LBA range: start 0x0 length 0x4000 00:32:14.614 NVMe0n1 : 10.07 12072.33 47.16 0.00 0.00 84503.43 19717.79 53796.51 00:32:14.614 [2024-11-29T12:16:14.434Z] =================================================================================================================== 00:32:14.614 [2024-11-29T12:16:14.434Z] Total : 12072.33 47.16 0.00 0.00 84503.43 19717.79 53796.51 00:32:14.614 { 00:32:14.614 "results": [ 00:32:14.614 { 00:32:14.614 "job": "NVMe0n1", 00:32:14.614 "core_mask": "0x1", 00:32:14.614 "workload": "verify", 00:32:14.614 "status": "finished", 00:32:14.614 "verify_range": { 00:32:14.614 "start": 0, 00:32:14.614 "length": 16384 00:32:14.614 }, 00:32:14.614 "queue_depth": 1024, 00:32:14.614 "io_size": 4096, 00:32:14.614 "runtime": 10.065415, 00:32:14.614 "iops": 12072.328860757356, 00:32:14.614 "mibps": 47.15753461233342, 00:32:14.614 "io_failed": 0, 00:32:14.614 "io_timeout": 0, 00:32:14.614 "avg_latency_us": 84503.43037266006, 00:32:14.614 "min_latency_us": 19717.787826086955, 00:32:14.614 "max_latency_us": 53796.507826086956 00:32:14.614 } 00:32:14.614 ], 00:32:14.614 "core_count": 1 00:32:14.614 } 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2199857 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2199857 ']' 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2199857 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2199857 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2199857' 00:32:14.614 killing process with pid 2199857 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2199857 00:32:14.614 Received shutdown signal, test time was about 10.000000 seconds 00:32:14.614 00:32:14.614 Latency(us) 00:32:14.614 [2024-11-29T12:16:14.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.614 [2024-11-29T12:16:14.434Z] =================================================================================================================== 00:32:14.614 [2024-11-29T12:16:14.434Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2199857 00:32:14.614 13:16:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:14.614 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:14.614 rmmod nvme_tcp 00:32:14.873 rmmod nvme_fabrics 00:32:14.873 rmmod nvme_keyring 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2199832 ']' 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2199832 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2199832 ']' 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2199832 00:32:14.873 13:16:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2199832 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2199832' 00:32:14.873 killing process with pid 2199832 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2199832 00:32:14.873 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2199832 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.133 13:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.037 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:17.038 00:32:17.038 real 0m19.191s 00:32:17.038 user 0m22.568s 00:32:17.038 sys 0m5.846s 00:32:17.038 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.038 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:17.038 ************************************ 00:32:17.038 END TEST nvmf_queue_depth 00:32:17.038 ************************************ 00:32:17.038 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:17.038 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:17.038 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.038 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:17.038 ************************************ 00:32:17.038 START 
TEST nvmf_target_multipath 00:32:17.038 ************************************ 00:32:17.038 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:17.297 * Looking for test storage... 00:32:17.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:17.297 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:17.297 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:32:17.297 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:17.297 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:17.297 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.298 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.298 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.298 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.298 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.298 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.298 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.298 13:16:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.298 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.298 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.298 13:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:17.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.298 --rc genhtml_branch_coverage=1 00:32:17.298 --rc genhtml_function_coverage=1 00:32:17.298 --rc genhtml_legend=1 00:32:17.298 --rc geninfo_all_blocks=1 00:32:17.298 --rc geninfo_unexecuted_blocks=1 00:32:17.298 00:32:17.298 ' 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:17.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.298 --rc genhtml_branch_coverage=1 00:32:17.298 --rc genhtml_function_coverage=1 00:32:17.298 --rc genhtml_legend=1 00:32:17.298 --rc geninfo_all_blocks=1 00:32:17.298 --rc geninfo_unexecuted_blocks=1 00:32:17.298 00:32:17.298 ' 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:17.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.298 --rc genhtml_branch_coverage=1 00:32:17.298 --rc genhtml_function_coverage=1 00:32:17.298 --rc genhtml_legend=1 00:32:17.298 --rc geninfo_all_blocks=1 00:32:17.298 --rc geninfo_unexecuted_blocks=1 00:32:17.298 00:32:17.298 ' 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:17.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.298 --rc genhtml_branch_coverage=1 00:32:17.298 --rc genhtml_function_coverage=1 00:32:17.298 --rc genhtml_legend=1 00:32:17.298 --rc geninfo_all_blocks=1 00:32:17.298 --rc geninfo_unexecuted_blocks=1 00:32:17.298 00:32:17.298 ' 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:17.298 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.299 13:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.299 13:16:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.299 13:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:22.575 13:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:22.575 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:22.575 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:22.575 Found net devices under 0000:86:00.0: cvl_0_0 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.575 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.576 13:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:22.576 Found net devices under 0000:86:00.1: cvl_0_1 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.576 13:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:22.576 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.835 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.835 13:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.835 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:22.835 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:22.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:32:22.835 00:32:22.835 --- 10.0.0.2 ping statistics --- 00:32:22.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.835 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:32:22.835 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:22.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:32:22.835 00:32:22.835 --- 10.0.0.1 ping statistics --- 00:32:22.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.835 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:32:22.835 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.835 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:22.835 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:22.835 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:22.836 only one NIC for nvmf test 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:22.836 13:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.836 rmmod nvme_tcp 00:32:22.836 rmmod nvme_fabrics 00:32:22.836 rmmod nvme_keyring 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:22.836 13:16:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.836 13:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.373 
13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.373 00:32:25.373 real 0m7.802s 00:32:25.373 user 0m1.662s 00:32:25.373 sys 0m4.160s 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:25.373 ************************************ 00:32:25.373 END TEST nvmf_target_multipath 00:32:25.373 ************************************ 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.373 ************************************ 00:32:25.373 START TEST nvmf_zcopy 00:32:25.373 ************************************ 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:25.373 * Looking for test storage... 
00:32:25.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.373 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:25.374 13:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.374 --rc genhtml_branch_coverage=1 00:32:25.374 --rc genhtml_function_coverage=1 00:32:25.374 --rc genhtml_legend=1 00:32:25.374 --rc geninfo_all_blocks=1 00:32:25.374 --rc geninfo_unexecuted_blocks=1 00:32:25.374 00:32:25.374 ' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.374 --rc genhtml_branch_coverage=1 00:32:25.374 --rc genhtml_function_coverage=1 00:32:25.374 --rc genhtml_legend=1 00:32:25.374 --rc geninfo_all_blocks=1 00:32:25.374 --rc geninfo_unexecuted_blocks=1 00:32:25.374 00:32:25.374 ' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.374 --rc genhtml_branch_coverage=1 00:32:25.374 --rc genhtml_function_coverage=1 00:32:25.374 --rc genhtml_legend=1 00:32:25.374 --rc geninfo_all_blocks=1 00:32:25.374 --rc geninfo_unexecuted_blocks=1 00:32:25.374 00:32:25.374 ' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.374 --rc genhtml_branch_coverage=1 00:32:25.374 --rc genhtml_function_coverage=1 00:32:25.374 --rc genhtml_legend=1 00:32:25.374 --rc geninfo_all_blocks=1 00:32:25.374 --rc geninfo_unexecuted_blocks=1 00:32:25.374 00:32:25.374 ' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.374 13:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.374 13:16:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.374 13:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:30.647 
13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.647 13:16:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:30.647 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:30.647 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:30.647 Found net devices under 0000:86:00.0: cvl_0_0 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.647 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:30.648 Found net devices under 0000:86:00.1: cvl_0_1 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.648 13:16:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:30.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:32:30.648 00:32:30.648 --- 10.0.0.2 ping statistics --- 00:32:30.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.648 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:30.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:30.648 00:32:30.648 --- 10.0.0.1 ping statistics --- 00:32:30.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.648 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2208498 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2208498 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2208498 ']' 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.648 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:30.908 [2024-11-29 13:16:30.514279] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:30.908 [2024-11-29 13:16:30.515296] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:32:30.908 [2024-11-29 13:16:30.515337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.908 [2024-11-29 13:16:30.581403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.908 [2024-11-29 13:16:30.622399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.908 [2024-11-29 13:16:30.622434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.908 [2024-11-29 13:16:30.622442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.908 [2024-11-29 13:16:30.622448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.908 [2024-11-29 13:16:30.622453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.908 [2024-11-29 13:16:30.623013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.908 [2024-11-29 13:16:30.690677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:30.908 [2024-11-29 13:16:30.690894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:30.908 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.908 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:30.908 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:30.908 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:30.908 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:31.167 [2024-11-29 13:16:30.751453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:31.167 
13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:31.167 [2024-11-29 13:16:30.767588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.167 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:31.168 malloc0 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:31.168 { 00:32:31.168 "params": { 00:32:31.168 "name": "Nvme$subsystem", 00:32:31.168 "trtype": "$TEST_TRANSPORT", 00:32:31.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.168 "adrfam": "ipv4", 00:32:31.168 "trsvcid": "$NVMF_PORT", 00:32:31.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.168 "hdgst": ${hdgst:-false}, 00:32:31.168 "ddgst": ${ddgst:-false} 00:32:31.168 }, 00:32:31.168 "method": "bdev_nvme_attach_controller" 00:32:31.168 } 00:32:31.168 EOF 00:32:31.168 )") 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:31.168 13:16:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:31.168 13:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:31.168 "params": { 00:32:31.168 "name": "Nvme1", 00:32:31.168 "trtype": "tcp", 00:32:31.168 "traddr": "10.0.0.2", 00:32:31.168 "adrfam": "ipv4", 00:32:31.168 "trsvcid": "4420", 00:32:31.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:31.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:31.168 "hdgst": false, 00:32:31.168 "ddgst": false 00:32:31.168 }, 00:32:31.168 "method": "bdev_nvme_attach_controller" 00:32:31.168 }' 00:32:31.168 [2024-11-29 13:16:30.845762] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:32:31.168 [2024-11-29 13:16:30.845805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208526 ] 00:32:31.168 [2024-11-29 13:16:30.906585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.168 [2024-11-29 13:16:30.948018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.427 Running I/O for 10 seconds... 
00:32:33.374 8293.00 IOPS, 64.79 MiB/s [2024-11-29T12:16:34.569Z] 8324.00 IOPS, 65.03 MiB/s [2024-11-29T12:16:35.505Z] 8356.33 IOPS, 65.28 MiB/s [2024-11-29T12:16:36.440Z] 8370.75 IOPS, 65.40 MiB/s [2024-11-29T12:16:37.376Z] 8379.40 IOPS, 65.46 MiB/s [2024-11-29T12:16:38.313Z] 8380.50 IOPS, 65.47 MiB/s [2024-11-29T12:16:39.247Z] 8383.71 IOPS, 65.50 MiB/s [2024-11-29T12:16:40.624Z] 8395.75 IOPS, 65.59 MiB/s [2024-11-29T12:16:41.559Z] 8391.56 IOPS, 65.56 MiB/s [2024-11-29T12:16:41.559Z] 8394.80 IOPS, 65.58 MiB/s 00:32:41.739 Latency(us) 00:32:41.739 [2024-11-29T12:16:41.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.739 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:41.739 Verification LBA range: start 0x0 length 0x1000 00:32:41.739 Nvme1n1 : 10.01 8399.48 65.62 0.00 0.00 15195.85 990.16 21883.33 00:32:41.739 [2024-11-29T12:16:41.559Z] =================================================================================================================== 00:32:41.739 [2024-11-29T12:16:41.559Z] Total : 8399.48 65.62 0.00 0.00 15195.85 990.16 21883.33 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2210137 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:41.739 13:16:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:41.739 { 00:32:41.739 "params": { 00:32:41.739 "name": "Nvme$subsystem", 00:32:41.739 "trtype": "$TEST_TRANSPORT", 00:32:41.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.739 "adrfam": "ipv4", 00:32:41.739 "trsvcid": "$NVMF_PORT", 00:32:41.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.739 "hdgst": ${hdgst:-false}, 00:32:41.739 "ddgst": ${ddgst:-false} 00:32:41.739 }, 00:32:41.739 "method": "bdev_nvme_attach_controller" 00:32:41.739 } 00:32:41.739 EOF 00:32:41.739 )") 00:32:41.739 [2024-11-29 13:16:41.383362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.739 [2024-11-29 13:16:41.383394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:32:41.739 [2024-11-29 13:16:41.391324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.739 [2024-11-29 13:16:41.391337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:41.739 13:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.739 "params": { 00:32:41.739 "name": "Nvme1", 00:32:41.740 "trtype": "tcp", 00:32:41.740 "traddr": "10.0.0.2", 00:32:41.740 "adrfam": "ipv4", 00:32:41.740 "trsvcid": "4420", 00:32:41.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.740 "hdgst": false, 00:32:41.740 "ddgst": false 00:32:41.740 }, 00:32:41.740 "method": "bdev_nvme_attach_controller" 00:32:41.740 }' 00:32:41.740 [2024-11-29 13:16:41.399315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.399331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.407315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.407326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.415315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.415326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.423315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.423326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.424411] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:32:41.740 [2024-11-29 13:16:41.424455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210137 ] 00:32:41.740 [2024-11-29 13:16:41.431316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.431327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.439315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.439325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.447315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.447326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.455316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.455327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.463316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.463326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.471315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.471325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.479315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.479325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:32:41.740 [2024-11-29 13:16:41.486255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.740 [2024-11-29 13:16:41.487315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.487326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.495316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.495329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.503316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.503329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.511316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.511327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.519324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.519335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.527317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.527333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.528136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.740 [2024-11-29 13:16:41.535317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.535329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.543326] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.543347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.740 [2024-11-29 13:16:41.551323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.740 [2024-11-29 13:16:41.551340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.559322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.559335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.567318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.567331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.575316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.575328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.583321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.583336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.591319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.591332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.599316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.599328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.607316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.607326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.615363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.615387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.623322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.623337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.631319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.631333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.639323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.639341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.647316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.647327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.655326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.655337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.663315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.663326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.671315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 
[2024-11-29 13:16:41.671326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.679317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.679334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.687318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.687332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.695320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.695334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.703316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.703326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.711315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.711326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.719314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.719325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.998 [2024-11-29 13:16:41.727315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.998 [2024-11-29 13:16:41.727326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.735319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.735334] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.743316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.743326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.751315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.751325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.759315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.759326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.767314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.767325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.775315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.775327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.783317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.783329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.791314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.791324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.799316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.799326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:41.999 [2024-11-29 13:16:41.807316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.807326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:41.999 [2024-11-29 13:16:41.815318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:41.999 [2024-11-29 13:16:41.815327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:42.257 [2024-11-29 13:16:41.823317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:42.257 [2024-11-29 13:16:41.823328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:42.257 [2024-11-29 13:16:41.831324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:42.257 [2024-11-29 13:16:41.831342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:42.257 [2024-11-29 13:16:41.839318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:42.257 [2024-11-29 13:16:41.839330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:42.257 Running I/O for 5 seconds... 
00:32:42.257 [2024-11-29 13:16:41.853438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:42.257 [2024-11-29 13:16:41.853459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:43.294 16358.00 IOPS, 127.80 MiB/s [2024-11-29T12:16:43.114Z]
00:32:43.554 [2024-11-29 13:16:43.265437] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.265455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:32:43.554 [2024-11-29 13:16:43.275091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.275110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.282017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.282035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.290363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.290382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.297193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.297223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.305702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.305727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.313654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.313673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.321409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.321427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.329498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.329517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.337380] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.337398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.345213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.345231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.352972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.352991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.361321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.361340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.554 [2024-11-29 13:16:43.369297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.554 [2024-11-29 13:16:43.369315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.377308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.377326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.385279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.385301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.393220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.393240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.401136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.401153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.408799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.408817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.418932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.418955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.425976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.425994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.434742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.434760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.442819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.442838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.450580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.450598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.458399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.458417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.466418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 
[2024-11-29 13:16:43.466436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.480505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.480525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.492023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.492041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.505047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.505066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.512767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.512785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.522241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.522259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.529173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.529191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.537732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.537750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.545618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.545636] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.553312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.553336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.563088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.563106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.570194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.570223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.578482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.578500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.586228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.586246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.594298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.594316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.602094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.602113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.610003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.610022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:43.814 [2024-11-29 13:16:43.617458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.617477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:43.814 [2024-11-29 13:16:43.625638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:43.814 [2024-11-29 13:16:43.625657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.633559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.633579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.641504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.641522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.649313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.649331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.657100] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.657119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.664963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.664981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.674628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.674646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.681764] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.681782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.691338] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.691356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.698033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.698051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.706512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.706536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.713922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.713941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.722169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.722187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.729819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.729837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.738327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.738344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.746367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.746385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.761323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.761341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.768765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.768783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.778163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.778182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.784930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.784954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.794822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.794840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.801513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.801531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.810070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.810089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.817529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 
[2024-11-29 13:16:43.817548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.825718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.825738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.833389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.833407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.841002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.841021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.849960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.849979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 16284.50 IOPS, 127.22 MiB/s [2024-11-29T12:16:43.894Z] [2024-11-29 13:16:43.857588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.857606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.865760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.865778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.873454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.873472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.880999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 
[2024-11-29 13:16:43.881017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.074 [2024-11-29 13:16:43.890684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.074 [2024-11-29 13:16:43.890703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.333 [2024-11-29 13:16:43.897690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.897707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.906090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.906109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.913641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.913660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.921318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.921337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.930868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.930887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.937877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.937896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.946447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.946465] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.953678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.953697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.962021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.962040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.969570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.969588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.977250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.977268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.985578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.985597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:43.993132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:43.993151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.002791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.002809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.009870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.009888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:44.334 [2024-11-29 13:16:44.018410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.018428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.026435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.026454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.040763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.040783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.051296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.051316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.058123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.058144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.066487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.066507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.073908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.073926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.082173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.082192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.090075] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.090093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.097883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.097902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.105794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.105812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.113763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.113781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.121918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.121936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.129135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.129154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.138760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.138779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.334 [2024-11-29 13:16:44.145408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.334 [2024-11-29 13:16:44.145426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.154219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.154239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.162236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.162254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.170440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.170459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.178314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.178332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.192410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.192429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.203733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.203751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.217531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.217551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.224599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.224618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.234353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 
[2024-11-29 13:16:44.234372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.241265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.241283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.251221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.251240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.258429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.258449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.266980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.266999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.593 [2024-11-29 13:16:44.274523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.593 [2024-11-29 13:16:44.274542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.282598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.282617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.290343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.290362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.298396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.298416] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.305631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.305650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.313995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.314013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.321583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.321602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.329268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.329287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.337286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.337304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.345207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.345226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.354408] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.354428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.361112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.361131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:44.594 [2024-11-29 13:16:44.370903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.370922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.378064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.378083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.386651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.386670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.394410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.394429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.594 [2024-11-29 13:16:44.409082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.594 [2024-11-29 13:16:44.409101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.416471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.416489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.426554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.426573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.433241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.433258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.441741] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.441761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.449518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.449537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.456590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.456608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.466454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.466473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.473341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.473360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.482230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.482249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.497063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.497083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.505966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.505988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.853 [2024-11-29 13:16:44.513064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:44.853 [2024-11-29 13:16:44.513082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.523065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.523084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.530042] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.530061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.538647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.538666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.546290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.546309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.554123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.554153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.562302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.562320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.570549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.570567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.577808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 
[2024-11-29 13:16:44.577827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.587140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.587159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.593920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.593939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.602560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.602578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.610354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.610373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.624730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.624749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.633881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.633901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.640624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.640643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.651301] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.651320] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.658257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.658277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:44.854 [2024-11-29 13:16:44.666795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:44.854 [2024-11-29 13:16:44.666818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.674421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.674439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.682887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.682906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.690453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.690472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.698717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.698735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.706266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.706283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.714332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.714351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:45.113 [2024-11-29 13:16:44.722226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.722245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.730213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.730232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.738439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.738458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.752549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.752568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.763757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.763775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.777330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.777349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.784382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.784401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.795461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.795479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.802245] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.802264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.810722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.810740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.818660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.818679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.826450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.826468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.834542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.834565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.841777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.841795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.850309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.850328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 16232.00 IOPS, 126.81 MiB/s [2024-11-29T12:16:44.933Z] [2024-11-29 13:16:44.858137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.858156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.866389] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.866407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.874217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.874236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.881605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.881623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.890079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.890097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.113 [2024-11-29 13:16:44.897984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.113 [2024-11-29 13:16:44.898002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.114 [2024-11-29 13:16:44.905989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.114 [2024-11-29 13:16:44.906008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.114 [2024-11-29 13:16:44.913755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.114 [2024-11-29 13:16:44.913774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.114 [2024-11-29 13:16:44.921611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.114 [2024-11-29 13:16:44.921630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.114 [2024-11-29 13:16:44.930092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:45.114 [2024-11-29 13:16:44.930111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:44.937917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:44.937936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:44.945601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:44.945621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:44.952646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:44.952665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:44.962498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:44.962516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:44.969075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:44.969092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:44.978710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:44.978729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:44.985332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:44.985351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:44.993690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 
[2024-11-29 13:16:44.993710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:45.001533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:45.001552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:45.009121] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:45.009139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.374 [2024-11-29 13:16:45.017224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.374 [2024-11-29 13:16:45.017242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.025111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.025128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.034182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.034200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.040873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.040891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.051538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.051556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.058517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.058537] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.066958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.066977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.074692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.074711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.082397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.082414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.089667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.089686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.097679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.097697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.105396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.105414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.112979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.112997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.122361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.122379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:45.375 [2024-11-29 13:16:45.129264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.129282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.375 [2024-11-29 13:16:45.137827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.375 [2024-11-29 13:16:45.137846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.376 [2024-11-29 13:16:45.145753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.376 [2024-11-29 13:16:45.145771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.376 [2024-11-29 13:16:45.153483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.376 [2024-11-29 13:16:45.153500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.376 [2024-11-29 13:16:45.161342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.376 [2024-11-29 13:16:45.161361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.376 [2024-11-29 13:16:45.169371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.376 [2024-11-29 13:16:45.169390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.376 [2024-11-29 13:16:45.177507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.376 [2024-11-29 13:16:45.177525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.376 [2024-11-29 13:16:45.186571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.376 [2024-11-29 13:16:45.186589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.193717] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.193736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.202501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.202520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.210266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.210285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.217829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.217847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.225941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.225965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.233538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.233556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.241457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.241476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.249011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.249030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.257137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.257155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.264777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.264797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.274807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.274826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.281869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.281888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.290196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.290214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.297712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.297731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.305883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.305902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.313367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.313386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.322791] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 
[2024-11-29 13:16:45.322810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.329667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.329686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.338179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.338198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.346027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.346045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.353767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.353785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.361589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.361608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.369031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.369050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.636 [2024-11-29 13:16:45.378472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.636 [2024-11-29 13:16:45.378492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:45.637 [2024-11-29 13:16:45.385624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:45.637 [2024-11-29 13:16:45.385643] 
00:32:45.637 [2024-11-29 13:16:45.394237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:45.637 [2024-11-29 13:16:45.394255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair above repeats for each retry, roughly every 7-15 ms, from 13:16:45.394 through 13:16:46.742 (elapsed 00:32:45.637-00:32:46.935); repeats elided, distinct interleaved output shown below ...]
00:32:46.156 16243.50 IOPS, 126.90 MiB/s [2024-11-29T12:16:45.976Z]
00:32:46.935 [2024-11-29 13:16:46.742483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-29 13:16:46.742501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.756964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.756982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.767377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.767395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.774076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.774094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.782639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.782658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.790298] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.790316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.803710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.803728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.817346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.817365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.824478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.824497] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.835807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.835825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.848796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.848814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 16235.00 IOPS, 126.84 MiB/s [2024-11-29T12:16:47.015Z] [2024-11-29 13:16:46.858875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.858893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.863774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.863796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 00:32:47.195 Latency(us) 00:32:47.195 [2024-11-29T12:16:47.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.195 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:47.195 Nvme1n1 : 5.01 16236.87 126.85 0.00 0.00 7875.86 2165.54 13107.20 00:32:47.195 [2024-11-29T12:16:47.015Z] =================================================================================================================== 00:32:47.195 [2024-11-29T12:16:47.015Z] Total : 16236.87 126.85 0.00 0.00 7875.86 2165.54 13107.20 00:32:47.195 [2024-11-29 13:16:46.871320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.871335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.879321] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.879335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.887320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.887332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.895331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.895348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.903323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.903337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.911320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.911333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.919322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.919335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.927320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.927332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.935319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.935332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.943318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.943332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.951317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.951330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.959319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.959332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.967319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.967330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.975317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.975328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.983316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.983326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.991316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.991326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:46.999318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 [2024-11-29 13:16:46.999329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.195 [2024-11-29 13:16:47.007317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.195 
[2024-11-29 13:16:47.007328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.454 [2024-11-29 13:16:47.015330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.454 [2024-11-29 13:16:47.015343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.454 [2024-11-29 13:16:47.023318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:47.454 [2024-11-29 13:16:47.023345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:47.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2210137) - No such process 00:32:47.454 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2210137 00:32:47.454 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.454 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.454 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:47.454 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.454 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:47.454 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.454 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:47.454 delay0 00:32:47.454 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.454 13:16:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:47.455 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.455 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:47.455 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.455 13:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:47.455 [2024-11-29 13:16:47.108930] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:54.018 Initializing NVMe Controllers 00:32:54.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:54.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:54.018 Initialization complete. Launching workers. 
00:32:54.018 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 243, failed: 21969 00:32:54.018 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22087, failed to submit 125 00:32:54.018 success 22008, unsuccessful 79, failed 0 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:54.018 rmmod nvme_tcp 00:32:54.018 rmmod nvme_fabrics 00:32:54.018 rmmod nvme_keyring 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2208498 ']' 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2208498 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 2208498 ']' 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2208498 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2208498 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2208498' 00:32:54.018 killing process with pid 2208498 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2208498 00:32:54.018 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2208498 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.277 13:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.182 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:56.182 00:32:56.182 real 0m31.208s 00:32:56.182 user 0m40.438s 00:32:56.182 sys 0m12.233s 00:32:56.182 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.182 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:56.182 ************************************ 00:32:56.182 END TEST nvmf_zcopy 00:32:56.182 ************************************ 00:32:56.182 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:56.182 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:56.182 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:56.182 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:56.182 
************************************ 00:32:56.182 START TEST nvmf_nmic 00:32:56.182 ************************************ 00:32:56.182 13:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:56.442 * Looking for test storage... 00:32:56.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.442 13:16:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.442 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.443 13:16:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:56.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.443 --rc genhtml_branch_coverage=1 00:32:56.443 --rc genhtml_function_coverage=1 00:32:56.443 --rc genhtml_legend=1 00:32:56.443 --rc geninfo_all_blocks=1 00:32:56.443 --rc geninfo_unexecuted_blocks=1 00:32:56.443 00:32:56.443 ' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:56.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.443 --rc genhtml_branch_coverage=1 00:32:56.443 --rc genhtml_function_coverage=1 00:32:56.443 --rc genhtml_legend=1 00:32:56.443 --rc geninfo_all_blocks=1 00:32:56.443 --rc geninfo_unexecuted_blocks=1 00:32:56.443 00:32:56.443 ' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:56.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.443 --rc genhtml_branch_coverage=1 00:32:56.443 --rc genhtml_function_coverage=1 00:32:56.443 --rc genhtml_legend=1 00:32:56.443 --rc geninfo_all_blocks=1 00:32:56.443 --rc geninfo_unexecuted_blocks=1 00:32:56.443 00:32:56.443 ' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:56.443 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.443 --rc genhtml_branch_coverage=1 00:32:56.443 --rc genhtml_function_coverage=1 00:32:56.443 --rc genhtml_legend=1 00:32:56.443 --rc geninfo_all_blocks=1 00:32:56.443 --rc geninfo_unexecuted_blocks=1 00:32:56.443 00:32:56.443 ' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:56.443 13:16:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.443 13:16:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:56.443 13:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:01.713 13:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:01.713 13:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:01.713 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:01.713 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:01.713 13:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.713 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:01.714 Found net devices under 0000:86:00.0: cvl_0_0 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:01.714 13:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:01.714 Found net devices under 0000:86:00.1: cvl_0_1 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:01.714 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:01.714 13:17:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:01.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:01.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:33:01.973 00:33:01.973 --- 10.0.0.2 ping statistics --- 00:33:01.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.973 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:01.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:01.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:33:01.973 00:33:01.973 --- 10.0.0.1 ping statistics --- 00:33:01.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:01.973 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2215488 
00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2215488 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2215488 ']' 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.973 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:01.973 [2024-11-29 13:17:01.698635] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:01.973 [2024-11-29 13:17:01.699576] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:33:01.973 [2024-11-29 13:17:01.699610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.973 [2024-11-29 13:17:01.764247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:02.232 [2024-11-29 13:17:01.809340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.232 [2024-11-29 13:17:01.809377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.232 [2024-11-29 13:17:01.809384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.232 [2024-11-29 13:17:01.809390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:02.232 [2024-11-29 13:17:01.809395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:02.232 [2024-11-29 13:17:01.810953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.232 [2024-11-29 13:17:01.811042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:02.232 [2024-11-29 13:17:01.811132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:02.232 [2024-11-29 13:17:01.811135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.232 [2024-11-29 13:17:01.879072] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:02.232 [2024-11-29 13:17:01.879169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:02.232 [2024-11-29 13:17:01.879382] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:02.232 [2024-11-29 13:17:01.879654] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:02.232 [2024-11-29 13:17:01.879836] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:02.232 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:02.232 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.233 [2024-11-29 13:17:01.951838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.233 Malloc0 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.233 13:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.233 [2024-11-29 13:17:02.023765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:02.233 test case1: single bdev can't be used in multiple subsystems 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.233 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.492 [2024-11-29 13:17:02.055538] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:33:02.492 [2024-11-29 13:17:02.055563] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:02.492 [2024-11-29 13:17:02.055571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.492 request: 00:33:02.492 { 00:33:02.492 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:02.492 "namespace": { 00:33:02.492 "bdev_name": "Malloc0", 00:33:02.492 "no_auto_visible": false, 00:33:02.492 "hide_metadata": false 00:33:02.492 }, 00:33:02.492 "method": "nvmf_subsystem_add_ns", 00:33:02.492 "req_id": 1 00:33:02.492 } 00:33:02.492 Got JSON-RPC error response 00:33:02.492 response: 00:33:02.492 { 00:33:02.492 "code": -32602, 00:33:02.492 "message": "Invalid parameters" 00:33:02.492 } 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:02.492 Adding namespace failed - expected result. 
00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:02.492 test case2: host connect to nvmf target in multiple paths 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:02.492 [2024-11-29 13:17:02.067639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:02.492 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:03.060 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:03.060 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:03.060 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:03.060 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:03.060 13:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:33:04.963 13:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:04.963 13:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:04.963 13:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:04.963 13:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:04.963 13:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:04.963 13:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:33:04.963 13:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:04.963 [global] 00:33:04.963 thread=1 00:33:04.963 invalidate=1 00:33:04.963 rw=write 00:33:04.963 time_based=1 00:33:04.963 runtime=1 00:33:04.963 ioengine=libaio 00:33:04.963 direct=1 00:33:04.963 bs=4096 00:33:04.963 iodepth=1 00:33:04.963 norandommap=0 00:33:04.963 numjobs=1 00:33:04.963 00:33:04.963 verify_dump=1 00:33:04.963 verify_backlog=512 00:33:04.963 verify_state_save=0 00:33:04.963 do_verify=1 00:33:04.963 verify=crc32c-intel 00:33:04.963 [job0] 00:33:04.963 filename=/dev/nvme0n1 00:33:04.963 Could not set queue depth (nvme0n1) 00:33:05.221 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:05.221 fio-3.35 00:33:05.221 Starting 1 thread 00:33:06.595 00:33:06.595 job0: (groupid=0, jobs=1): err= 0: pid=2216264: Fri Nov 29 
13:17:06 2024 00:33:06.595 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:33:06.595 slat (nsec): min=5711, max=23705, avg=21638.74, stdev=4528.05 00:33:06.595 clat (usec): min=398, max=42003, avg=39235.91, stdev=8469.56 00:33:06.595 lat (usec): min=418, max=42026, avg=39257.55, stdev=8469.94 00:33:06.595 clat percentiles (usec): 00:33:06.595 | 1.00th=[ 400], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:06.595 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:06.595 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:06.595 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:06.595 | 99.99th=[42206] 00:33:06.595 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:33:06.595 slat (usec): min=9, max=27226, avg=64.13, stdev=1202.76 00:33:06.596 clat (usec): min=127, max=1791, avg=167.71, stdev=78.52 00:33:06.596 lat (usec): min=141, max=27473, avg=231.84, stdev=1208.86 00:33:06.596 clat percentiles (usec): 00:33:06.596 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:33:06.596 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 182], 00:33:06.596 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 241], 00:33:06.596 | 99.00th=[ 253], 99.50th=[ 255], 99.90th=[ 1795], 99.95th=[ 1795], 00:33:06.596 | 99.99th=[ 1795] 00:33:06.596 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:06.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:06.596 lat (usec) : 250=93.64%, 500=2.06% 00:33:06.596 lat (msec) : 2=0.19%, 50=4.11% 00:33:06.596 cpu : usr=0.20%, sys=0.59%, ctx=538, majf=0, minf=1 00:33:06.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:06.596 issued 
rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:06.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:06.596 00:33:06.596 Run status group 0 (all jobs): 00:33:06.596 READ: bw=89.9KiB/s (92.1kB/s), 89.9KiB/s-89.9KiB/s (92.1kB/s-92.1kB/s), io=92.0KiB (94.2kB), run=1023-1023msec 00:33:06.596 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:33:06.596 00:33:06.596 Disk stats (read/write): 00:33:06.596 nvme0n1: ios=45/512, merge=0/0, ticks=1723/83, in_queue=1806, util=98.50% 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:06.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:06.596 13:17:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:06.596 rmmod nvme_tcp 00:33:06.596 rmmod nvme_fabrics 00:33:06.596 rmmod nvme_keyring 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2215488 ']' 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2215488 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2215488 ']' 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2215488 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2215488 
00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2215488' 00:33:06.596 killing process with pid 2215488 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2215488 00:33:06.596 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2215488 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.854 13:17:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.854 13:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.391 00:33:09.391 real 0m12.597s 00:33:09.391 user 0m24.080s 00:33:09.391 sys 0m5.743s 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:09.391 ************************************ 00:33:09.391 END TEST nvmf_nmic 00:33:09.391 ************************************ 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:09.391 ************************************ 00:33:09.391 START TEST nvmf_fio_target 00:33:09.391 ************************************ 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:09.391 * Looking for test storage... 
00:33:09.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.391 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.392 
13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.392 --rc genhtml_branch_coverage=1 00:33:09.392 --rc genhtml_function_coverage=1 00:33:09.392 --rc genhtml_legend=1 00:33:09.392 --rc geninfo_all_blocks=1 00:33:09.392 --rc geninfo_unexecuted_blocks=1 00:33:09.392 00:33:09.392 ' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.392 --rc genhtml_branch_coverage=1 00:33:09.392 --rc genhtml_function_coverage=1 00:33:09.392 --rc genhtml_legend=1 00:33:09.392 --rc geninfo_all_blocks=1 00:33:09.392 --rc geninfo_unexecuted_blocks=1 00:33:09.392 00:33:09.392 ' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.392 --rc genhtml_branch_coverage=1 00:33:09.392 --rc genhtml_function_coverage=1 00:33:09.392 --rc genhtml_legend=1 00:33:09.392 --rc geninfo_all_blocks=1 00:33:09.392 --rc geninfo_unexecuted_blocks=1 00:33:09.392 00:33:09.392 ' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.392 --rc genhtml_branch_coverage=1 00:33:09.392 --rc genhtml_function_coverage=1 00:33:09.392 --rc genhtml_legend=1 00:33:09.392 --rc geninfo_all_blocks=1 
00:33:09.392 --rc geninfo_unexecuted_blocks=1 00:33:09.392 00:33:09.392 ' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:09.392 
13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.392 13:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.392 
13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:09.392 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.393 13:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.393 13:17:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:14.663 13:17:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:14.663 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:14.663 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.663 
13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:14.663 Found net 
devices under 0000:86:00.0: cvl_0_0 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.663 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:14.663 Found net devices under 0000:86:00.1: cvl_0_1 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:14.664 13:17:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:33:14.664 13:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:14.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:33:14.664 00:33:14.664 --- 10.0.0.2 ping statistics --- 00:33:14.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.664 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:33:14.664 00:33:14.664 --- 10.0.0.1 ping statistics --- 00:33:14.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.664 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.664 13:17:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2219845 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2219845 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2219845 ']' 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.664 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.664 [2024-11-29 13:17:14.303303] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:14.664 [2024-11-29 13:17:14.304238] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:33:14.664 [2024-11-29 13:17:14.304274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.664 [2024-11-29 13:17:14.372426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:14.664 [2024-11-29 13:17:14.415769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.664 [2024-11-29 13:17:14.415808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.664 [2024-11-29 13:17:14.415816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.664 [2024-11-29 13:17:14.415822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.664 [2024-11-29 13:17:14.415828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.664 [2024-11-29 13:17:14.417401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.664 [2024-11-29 13:17:14.417500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:14.664 [2024-11-29 13:17:14.417566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:14.664 [2024-11-29 13:17:14.417568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.924 [2024-11-29 13:17:14.485866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:14.924 [2024-11-29 13:17:14.485925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:14.924 [2024-11-29 13:17:14.486087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:14.924 [2024-11-29 13:17:14.486369] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:14.924 [2024-11-29 13:17:14.486550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:14.924 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.924 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:14.924 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:14.924 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.924 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.924 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.924 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:14.924 [2024-11-29 13:17:14.730084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.182 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:15.182 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:15.182 13:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:33:15.440 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:15.440 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:15.698 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:15.698 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:15.956 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:15.956 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:16.215 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:16.215 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:16.215 13:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:16.474 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:16.474 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:16.733 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:33:16.733 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:16.993 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:16.993 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:16.993 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:17.252 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:17.252 13:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:17.511 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.770 [2024-11-29 13:17:17.334224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.770 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:17.770 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:18.029 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:18.288 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:18.288 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:18.288 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:18.288 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:18.288 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:18.288 13:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:33:20.192 13:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:20.192 13:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:20.192 13:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:20.192 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:33:20.193 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:20.193 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:33:20.193 13:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:20.450 [global] 00:33:20.450 thread=1 00:33:20.450 invalidate=1 00:33:20.450 rw=write 00:33:20.450 time_based=1 00:33:20.450 runtime=1 00:33:20.450 ioengine=libaio 00:33:20.450 direct=1 00:33:20.450 bs=4096 00:33:20.450 iodepth=1 00:33:20.450 norandommap=0 00:33:20.450 numjobs=1 00:33:20.450 00:33:20.450 verify_dump=1 00:33:20.450 verify_backlog=512 00:33:20.450 verify_state_save=0 00:33:20.450 do_verify=1 00:33:20.450 verify=crc32c-intel 00:33:20.450 [job0] 00:33:20.450 filename=/dev/nvme0n1 00:33:20.450 [job1] 00:33:20.450 filename=/dev/nvme0n2 00:33:20.450 [job2] 00:33:20.450 filename=/dev/nvme0n3 00:33:20.450 [job3] 00:33:20.450 filename=/dev/nvme0n4 00:33:20.450 Could not set queue depth (nvme0n1) 00:33:20.450 Could not set queue depth (nvme0n2) 00:33:20.450 Could not set queue depth (nvme0n3) 00:33:20.450 Could not set queue depth (nvme0n4) 00:33:20.707 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:20.707 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:20.707 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:20.707 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:20.707 fio-3.35 00:33:20.707 Starting 4 threads 00:33:21.793 00:33:21.793 job0: (groupid=0, jobs=1): err= 0: pid=2220955: Fri Nov 29 13:17:21 2024 00:33:21.793 read: IOPS=581, BW=2325KiB/s (2381kB/s)(2360KiB/1015msec) 00:33:21.793 slat (nsec): min=7294, max=25316, avg=8889.11, stdev=2846.49 00:33:21.793 clat (usec): min=213, max=41031, avg=1356.10, stdev=6619.26 00:33:21.793 lat (usec): min=221, 
max=41055, avg=1364.99, stdev=6621.81 00:33:21.793 clat percentiles (usec): 00:33:21.793 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 241], 00:33:21.793 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:33:21.793 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 318], 00:33:21.793 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:21.793 | 99.99th=[41157] 00:33:21.793 write: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec); 0 zone resets 00:33:21.793 slat (nsec): min=10869, max=43968, avg=12278.82, stdev=1840.87 00:33:21.793 clat (usec): min=137, max=296, avg=186.54, stdev=33.74 00:33:21.793 lat (usec): min=149, max=337, avg=198.82, stdev=34.09 00:33:21.793 clat percentiles (usec): 00:33:21.793 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:33:21.793 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:33:21.793 | 70.00th=[ 194], 80.00th=[ 237], 90.00th=[ 241], 95.00th=[ 243], 00:33:21.793 | 99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 293], 99.95th=[ 297], 00:33:21.793 | 99.99th=[ 297] 00:33:21.793 bw ( KiB/s): min= 8192, max= 8192, per=37.68%, avg=8192.00, stdev= 0.00, samples=1 00:33:21.793 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:21.793 lat (usec) : 250=87.42%, 500=11.52%, 750=0.06% 00:33:21.793 lat (msec) : 50=0.99% 00:33:21.793 cpu : usr=1.38%, sys=2.56%, ctx=1617, majf=0, minf=1 00:33:21.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:21.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.793 issued rwts: total=590,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:21.793 job1: (groupid=0, jobs=1): err= 0: pid=2220959: Fri Nov 29 13:17:21 2024 00:33:21.793 read: IOPS=21, BW=86.5KiB/s 
(88.6kB/s)(88.0KiB/1017msec) 00:33:21.793 slat (nsec): min=9807, max=26530, avg=23525.00, stdev=3181.74 00:33:21.793 clat (usec): min=40785, max=41978, avg=41029.86, stdev=238.95 00:33:21.793 lat (usec): min=40810, max=42002, avg=41053.39, stdev=237.80 00:33:21.793 clat percentiles (usec): 00:33:21.793 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:21.793 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:21.793 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:21.793 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:21.793 | 99.99th=[42206] 00:33:21.793 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:33:21.793 slat (nsec): min=10902, max=43845, avg=12829.45, stdev=2428.97 00:33:21.793 clat (usec): min=149, max=397, avg=201.61, stdev=19.37 00:33:21.793 lat (usec): min=160, max=409, avg=214.44, stdev=19.84 00:33:21.793 clat percentiles (usec): 00:33:21.793 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:33:21.793 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:33:21.793 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 231], 00:33:21.793 | 99.00th=[ 245], 99.50th=[ 285], 99.90th=[ 400], 99.95th=[ 400], 00:33:21.793 | 99.99th=[ 400] 00:33:21.793 bw ( KiB/s): min= 4096, max= 4096, per=18.84%, avg=4096.00, stdev= 0.00, samples=1 00:33:21.793 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:21.793 lat (usec) : 250=94.94%, 500=0.94% 00:33:21.793 lat (msec) : 50=4.12% 00:33:21.793 cpu : usr=0.30%, sys=1.08%, ctx=536, majf=0, minf=1 00:33:21.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:21.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.793 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.793 
latency : target=0, window=0, percentile=100.00%, depth=1 00:33:21.793 job2: (groupid=0, jobs=1): err= 0: pid=2220965: Fri Nov 29 13:17:21 2024 00:33:21.793 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:33:21.793 slat (nsec): min=7579, max=25653, avg=8816.58, stdev=1297.82 00:33:21.793 clat (usec): min=213, max=41019, avg=401.34, stdev=2536.85 00:33:21.793 lat (usec): min=221, max=41041, avg=410.16, stdev=2537.79 00:33:21.793 clat percentiles (usec): 00:33:21.793 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 233], 00:33:21.793 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241], 00:33:21.793 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 260], 00:33:21.793 | 99.00th=[ 383], 99.50th=[ 963], 99.90th=[41157], 99.95th=[41157], 00:33:21.793 | 99.99th=[41157] 00:33:21.793 write: IOPS=1941, BW=7764KiB/s (7951kB/s)(7772KiB/1001msec); 0 zone resets 00:33:21.793 slat (nsec): min=10820, max=43560, avg=12430.64, stdev=1804.12 00:33:21.793 clat (usec): min=140, max=359, avg=171.64, stdev=14.41 00:33:21.793 lat (usec): min=157, max=371, avg=184.07, stdev=14.70 00:33:21.793 clat percentiles (usec): 00:33:21.793 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:33:21.793 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:33:21.793 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 194], 00:33:21.793 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 289], 99.95th=[ 359], 00:33:21.793 | 99.99th=[ 359] 00:33:21.793 bw ( KiB/s): min= 4816, max= 4816, per=22.15%, avg=4816.00, stdev= 0.00, samples=1 00:33:21.793 iops : min= 1204, max= 1204, avg=1204.00, stdev= 0.00, samples=1 00:33:21.793 lat (usec) : 250=94.25%, 500=5.43%, 750=0.06%, 1000=0.06% 00:33:21.793 lat (msec) : 2=0.03%, 50=0.17% 00:33:21.793 cpu : usr=3.00%, sys=5.80%, ctx=3480, majf=0, minf=1 00:33:21.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:21.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.793 issued rwts: total=1536,1943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:21.793 job3: (groupid=0, jobs=1): err= 0: pid=2220968: Fri Nov 29 13:17:21 2024 00:33:21.793 read: IOPS=1786, BW=7145KiB/s (7316kB/s)(7152KiB/1001msec) 00:33:21.793 slat (nsec): min=7588, max=40312, avg=8805.15, stdev=1500.52 00:33:21.793 clat (usec): min=231, max=1246, avg=292.63, stdev=51.66 00:33:21.793 lat (usec): min=239, max=1258, avg=301.43, stdev=51.74 00:33:21.793 clat percentiles (usec): 00:33:21.793 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:33:21.793 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:33:21.793 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 343], 00:33:21.793 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 1090], 99.95th=[ 1254], 00:33:21.793 | 99.99th=[ 1254] 00:33:21.793 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:33:21.793 slat (nsec): min=11155, max=45180, avg=12526.95, stdev=1669.56 00:33:21.793 clat (usec): min=146, max=911, avg=206.09, stdev=40.76 00:33:21.793 lat (usec): min=158, max=926, avg=218.62, stdev=40.98 00:33:21.793 clat percentiles (usec): 00:33:21.793 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:33:21.793 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:33:21.793 | 70.00th=[ 208], 80.00th=[ 227], 90.00th=[ 285], 95.00th=[ 293], 00:33:21.793 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 375], 99.95th=[ 400], 00:33:21.793 | 99.99th=[ 914] 00:33:21.793 bw ( KiB/s): min= 8192, max= 8192, per=37.68%, avg=8192.00, stdev= 0.00, samples=1 00:33:21.793 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:21.793 lat (usec) : 250=47.11%, 500=52.71%, 750=0.08%, 1000=0.05% 00:33:21.793 lat (msec) : 2=0.05% 00:33:21.793 cpu : 
usr=3.20%, sys=6.40%, ctx=3838, majf=0, minf=1 00:33:21.793 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:21.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:21.793 issued rwts: total=1788,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:21.793 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:21.793 00:33:21.793 Run status group 0 (all jobs): 00:33:21.793 READ: bw=15.1MiB/s (15.9MB/s), 86.5KiB/s-7145KiB/s (88.6kB/s-7316kB/s), io=15.4MiB (16.1MB), run=1001-1017msec 00:33:21.793 WRITE: bw=21.2MiB/s (22.3MB/s), 2014KiB/s-8184KiB/s (2062kB/s-8380kB/s), io=21.6MiB (22.6MB), run=1001-1017msec 00:33:21.793 00:33:21.793 Disk stats (read/write): 00:33:21.793 nvme0n1: ios=611/1024, merge=0/0, ticks=1492/183, in_queue=1675, util=85.87% 00:33:21.793 nvme0n2: ios=67/512, merge=0/0, ticks=1215/98, in_queue=1313, util=89.63% 00:33:21.793 nvme0n3: ios=1292/1536, merge=0/0, ticks=1065/239, in_queue=1304, util=93.33% 00:33:21.793 nvme0n4: ios=1593/1739, merge=0/0, ticks=954/351, in_queue=1305, util=94.22% 00:33:21.794 13:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:21.794 [global] 00:33:21.794 thread=1 00:33:21.794 invalidate=1 00:33:21.794 rw=randwrite 00:33:21.794 time_based=1 00:33:21.794 runtime=1 00:33:21.794 ioengine=libaio 00:33:21.794 direct=1 00:33:21.794 bs=4096 00:33:21.794 iodepth=1 00:33:21.794 norandommap=0 00:33:21.794 numjobs=1 00:33:21.794 00:33:21.794 verify_dump=1 00:33:21.794 verify_backlog=512 00:33:21.794 verify_state_save=0 00:33:21.794 do_verify=1 00:33:21.794 verify=crc32c-intel 00:33:22.096 [job0] 00:33:22.096 filename=/dev/nvme0n1 00:33:22.096 [job1] 00:33:22.096 filename=/dev/nvme0n2 00:33:22.096 [job2] 00:33:22.096 
filename=/dev/nvme0n3 00:33:22.096 [job3] 00:33:22.096 filename=/dev/nvme0n4 00:33:22.096 Could not set queue depth (nvme0n1) 00:33:22.096 Could not set queue depth (nvme0n2) 00:33:22.096 Could not set queue depth (nvme0n3) 00:33:22.096 Could not set queue depth (nvme0n4) 00:33:22.361 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:22.361 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:22.361 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:22.361 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:22.361 fio-3.35 00:33:22.361 Starting 4 threads 00:33:23.304 00:33:23.304 job0: (groupid=0, jobs=1): err= 0: pid=2221349: Fri Nov 29 13:17:23 2024 00:33:23.304 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:23.304 slat (nsec): min=7259, max=22514, avg=8270.07, stdev=944.19 00:33:23.304 clat (usec): min=212, max=287, avg=243.00, stdev= 9.19 00:33:23.304 lat (usec): min=220, max=297, avg=251.27, stdev= 9.29 00:33:23.304 clat percentiles (usec): 00:33:23.304 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:33:23.304 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:33:23.304 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 260], 00:33:23.304 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 289], 00:33:23.304 | 99.99th=[ 289] 00:33:23.304 write: IOPS=2485, BW=9942KiB/s (10.2MB/s)(9952KiB/1001msec); 0 zone resets 00:33:23.304 slat (usec): min=10, max=16860, avg=19.01, stdev=337.79 00:33:23.304 clat (usec): min=126, max=364, avg=170.58, stdev=19.54 00:33:23.304 lat (usec): min=150, max=17072, avg=189.59, stdev=339.21 00:33:23.304 clat percentiles (usec): 00:33:23.304 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:33:23.304 | 
30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:33:23.304 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 194], 95.00th=[ 208], 00:33:23.304 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 265], 99.95th=[ 343], 00:33:23.304 | 99.99th=[ 367] 00:33:23.304 bw ( KiB/s): min= 9872, max= 9872, per=45.23%, avg=9872.00, stdev= 0.00, samples=1 00:33:23.304 iops : min= 2468, max= 2468, avg=2468.00, stdev= 0.00, samples=1 00:33:23.304 lat (usec) : 250=90.98%, 500=9.02% 00:33:23.304 cpu : usr=5.10%, sys=5.90%, ctx=4540, majf=0, minf=1 00:33:23.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.304 issued rwts: total=2048,2488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:23.304 job1: (groupid=0, jobs=1): err= 0: pid=2221359: Fri Nov 29 13:17:23 2024 00:33:23.304 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:33:23.304 slat (nsec): min=8993, max=12179, avg=10044.59, stdev=819.54 00:33:23.304 clat (usec): min=40894, max=41229, avg=41000.16, stdev=61.47 00:33:23.304 lat (usec): min=40905, max=41239, avg=41010.20, stdev=61.35 00:33:23.304 clat percentiles (usec): 00:33:23.304 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:23.304 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:23.304 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:23.304 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:23.304 | 99.99th=[41157] 00:33:23.304 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:33:23.304 slat (nsec): min=9985, max=41566, avg=11667.04, stdev=2282.03 00:33:23.304 clat (usec): min=162, max=268, avg=196.65, stdev=15.86 00:33:23.304 lat (usec): min=174, max=299, 
avg=208.32, stdev=16.28 00:33:23.304 clat percentiles (usec): 00:33:23.304 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:33:23.304 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:33:23.304 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 223], 00:33:23.304 | 99.00th=[ 237], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 269], 00:33:23.304 | 99.99th=[ 269] 00:33:23.304 bw ( KiB/s): min= 4096, max= 4096, per=18.77%, avg=4096.00, stdev= 0.00, samples=1 00:33:23.304 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:23.304 lat (usec) : 250=95.51%, 500=0.37% 00:33:23.304 lat (msec) : 50=4.12% 00:33:23.304 cpu : usr=0.40%, sys=0.89%, ctx=534, majf=0, minf=2 00:33:23.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.304 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:23.304 job2: (groupid=0, jobs=1): err= 0: pid=2221376: Fri Nov 29 13:17:23 2024 00:33:23.304 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:33:23.304 slat (nsec): min=10405, max=22890, avg=20648.18, stdev=3879.32 00:33:23.304 clat (usec): min=40825, max=41958, avg=41082.24, stdev=295.07 00:33:23.304 lat (usec): min=40848, max=41980, avg=41102.88, stdev=292.83 00:33:23.304 clat percentiles (usec): 00:33:23.304 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:23.304 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:23.304 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:33:23.304 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:23.304 | 99.99th=[42206] 00:33:23.304 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 
00:33:23.304 slat (nsec): min=9956, max=37374, avg=11244.29, stdev=1749.42 00:33:23.304 clat (usec): min=155, max=301, avg=207.21, stdev=22.79 00:33:23.304 lat (usec): min=166, max=320, avg=218.46, stdev=23.30 00:33:23.304 clat percentiles (usec): 00:33:23.304 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:33:23.304 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:33:23.304 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 243], 00:33:23.304 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 302], 99.95th=[ 302], 00:33:23.304 | 99.99th=[ 302] 00:33:23.304 bw ( KiB/s): min= 4096, max= 4096, per=18.77%, avg=4096.00, stdev= 0.00, samples=1 00:33:23.304 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:23.304 lat (usec) : 250=92.51%, 500=3.37% 00:33:23.304 lat (msec) : 50=4.12% 00:33:23.304 cpu : usr=0.20%, sys=0.59%, ctx=536, majf=0, minf=1 00:33:23.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.304 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:23.304 job3: (groupid=0, jobs=1): err= 0: pid=2221381: Fri Nov 29 13:17:23 2024 00:33:23.304 read: IOPS=1929, BW=7716KiB/s (7901kB/s)(7724KiB/1001msec) 00:33:23.304 slat (nsec): min=8446, max=40888, avg=9678.27, stdev=1666.42 00:33:23.304 clat (usec): min=230, max=504, avg=275.43, stdev=46.01 00:33:23.304 lat (usec): min=239, max=515, avg=285.11, stdev=46.08 00:33:23.304 clat percentiles (usec): 00:33:23.304 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 253], 00:33:23.304 | 30.00th=[ 258], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:33:23.304 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 437], 00:33:23.305 | 99.00th=[ 461], 99.50th=[ 
465], 99.90th=[ 482], 99.95th=[ 506], 00:33:23.305 | 99.99th=[ 506] 00:33:23.305 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:33:23.305 slat (nsec): min=10245, max=41340, avg=12525.05, stdev=1876.76 00:33:23.305 clat (usec): min=145, max=322, avg=200.69, stdev=36.63 00:33:23.305 lat (usec): min=156, max=335, avg=213.21, stdev=36.85 00:33:23.305 clat percentiles (usec): 00:33:23.305 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 174], 00:33:23.305 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 196], 00:33:23.305 | 70.00th=[ 217], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 281], 00:33:23.305 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 310], 99.95th=[ 314], 00:33:23.305 | 99.99th=[ 322] 00:33:23.305 bw ( KiB/s): min= 8192, max= 8192, per=37.53%, avg=8192.00, stdev= 0.00, samples=1 00:33:23.305 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:23.305 lat (usec) : 250=52.53%, 500=47.45%, 750=0.03% 00:33:23.305 cpu : usr=4.20%, sys=6.20%, ctx=3980, majf=0, minf=2 00:33:23.305 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.305 issued rwts: total=1931,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:23.305 00:33:23.305 Run status group 0 (all jobs): 00:33:23.305 READ: bw=15.4MiB/s (16.2MB/s), 86.4KiB/s-8184KiB/s (88.4kB/s-8380kB/s), io=15.7MiB (16.5MB), run=1001-1019msec 00:33:23.305 WRITE: bw=21.3MiB/s (22.3MB/s), 2010KiB/s-9942KiB/s (2058kB/s-10.2MB/s), io=21.7MiB (22.8MB), run=1001-1019msec 00:33:23.305 00:33:23.305 Disk stats (read/write): 00:33:23.305 nvme0n1: ios=1800/2048, merge=0/0, ticks=1415/331, in_queue=1746, util=98.80% 00:33:23.305 nvme0n2: ios=18/512, merge=0/0, ticks=739/95, in_queue=834, util=86.69% 
00:33:23.305 nvme0n3: ios=41/512, merge=0/0, ticks=1685/103, in_queue=1788, util=99.17% 00:33:23.305 nvme0n4: ios=1536/1877, merge=0/0, ticks=386/355, in_queue=741, util=89.70% 00:33:23.562 13:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:23.562 [global] 00:33:23.562 thread=1 00:33:23.562 invalidate=1 00:33:23.562 rw=write 00:33:23.562 time_based=1 00:33:23.562 runtime=1 00:33:23.562 ioengine=libaio 00:33:23.562 direct=1 00:33:23.562 bs=4096 00:33:23.562 iodepth=128 00:33:23.562 norandommap=0 00:33:23.562 numjobs=1 00:33:23.562 00:33:23.562 verify_dump=1 00:33:23.562 verify_backlog=512 00:33:23.562 verify_state_save=0 00:33:23.562 do_verify=1 00:33:23.562 verify=crc32c-intel 00:33:23.562 [job0] 00:33:23.562 filename=/dev/nvme0n1 00:33:23.562 [job1] 00:33:23.562 filename=/dev/nvme0n2 00:33:23.562 [job2] 00:33:23.562 filename=/dev/nvme0n3 00:33:23.562 [job3] 00:33:23.562 filename=/dev/nvme0n4 00:33:23.562 Could not set queue depth (nvme0n1) 00:33:23.562 Could not set queue depth (nvme0n2) 00:33:23.562 Could not set queue depth (nvme0n3) 00:33:23.562 Could not set queue depth (nvme0n4) 00:33:23.819 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:23.819 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:23.819 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:23.819 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:23.819 fio-3.35 00:33:23.819 Starting 4 threads 00:33:25.188 00:33:25.188 job0: (groupid=0, jobs=1): err= 0: pid=2221779: Fri Nov 29 13:17:24 2024 00:33:25.188 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:33:25.188 slat (nsec): min=1122, 
max=9144.4k, avg=93359.64, stdev=512653.67 00:33:25.188 clat (usec): min=7743, max=21683, avg=12015.68, stdev=1846.95 00:33:25.188 lat (usec): min=7750, max=21696, avg=12109.04, stdev=1875.94 00:33:25.188 clat percentiles (usec): 00:33:25.188 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10683], 00:33:25.188 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:33:25.188 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14222], 95.00th=[15139], 00:33:25.188 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19792], 99.95th=[19792], 00:33:25.188 | 99.99th=[21627] 00:33:25.188 write: IOPS=5314, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1002msec); 0 zone resets 00:33:25.188 slat (usec): min=2, max=8179, avg=93.22, stdev=495.40 00:33:25.188 clat (usec): min=608, max=24689, avg=12251.77, stdev=2424.13 00:33:25.188 lat (usec): min=3959, max=24704, avg=12344.99, stdev=2457.78 00:33:25.188 clat percentiles (usec): 00:33:25.188 | 1.00th=[ 7046], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10814], 00:33:25.188 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:33:25.188 | 70.00th=[12387], 80.00th=[13042], 90.00th=[14746], 95.00th=[16581], 00:33:25.188 | 99.00th=[21627], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:33:25.188 | 99.99th=[24773] 00:33:25.188 bw ( KiB/s): min=20480, max=21096, per=26.02%, avg=20788.00, stdev=435.58, samples=2 00:33:25.188 iops : min= 5120, max= 5274, avg=5197.00, stdev=108.89, samples=2 00:33:25.188 lat (usec) : 750=0.01% 00:33:25.188 lat (msec) : 4=0.02%, 10=8.22%, 20=90.37%, 50=1.38% 00:33:25.188 cpu : usr=3.90%, sys=5.19%, ctx=505, majf=0, minf=1 00:33:25.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:25.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:25.188 issued rwts: total=5120,5325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.188 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:33:25.188 job1: (groupid=0, jobs=1): err= 0: pid=2221793: Fri Nov 29 13:17:24 2024 00:33:25.188 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:33:25.188 slat (nsec): min=1106, max=18147k, avg=100167.54, stdev=736353.84 00:33:25.188 clat (usec): min=2871, max=48545, avg=13286.01, stdev=4714.40 00:33:25.188 lat (usec): min=2876, max=48551, avg=13386.18, stdev=4749.50 00:33:25.188 clat percentiles (usec): 00:33:25.188 | 1.00th=[ 5080], 5.00th=[ 7308], 10.00th=[ 9372], 20.00th=[10290], 00:33:25.188 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12387], 60.00th=[13173], 00:33:25.188 | 70.00th=[14091], 80.00th=[15533], 90.00th=[19268], 95.00th=[21103], 00:33:25.188 | 99.00th=[30802], 99.50th=[33817], 99.90th=[43254], 99.95th=[43254], 00:33:25.188 | 99.99th=[48497] 00:33:25.188 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:33:25.188 slat (nsec): min=1834, max=23047k, avg=94142.25, stdev=762297.52 00:33:25.188 clat (usec): min=1116, max=57729, avg=14316.27, stdev=9007.39 00:33:25.188 lat (usec): min=1125, max=57773, avg=14410.41, stdev=9068.82 00:33:25.188 clat percentiles (usec): 00:33:25.188 | 1.00th=[ 1926], 5.00th=[ 6915], 10.00th=[ 8848], 20.00th=[10290], 00:33:25.188 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:33:25.188 | 70.00th=[12256], 80.00th=[15139], 90.00th=[23987], 95.00th=[36439], 00:33:25.188 | 99.00th=[56886], 99.50th=[57410], 99.90th=[57410], 99.95th=[57934], 00:33:25.188 | 99.99th=[57934] 00:33:25.188 bw ( KiB/s): min=18112, max=18752, per=23.07%, avg=18432.00, stdev=452.55, samples=2 00:33:25.188 iops : min= 4528, max= 4688, avg=4608.00, stdev=113.14, samples=2 00:33:25.189 lat (msec) : 2=0.60%, 4=0.74%, 10=15.82%, 20=72.14%, 50=10.11% 00:33:25.189 lat (msec) : 100=0.60% 00:33:25.189 cpu : usr=2.20%, sys=5.29%, ctx=359, majf=0, minf=2 00:33:25.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 
00:33:25.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:25.189 issued rwts: total=4608,4612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:25.189 job2: (groupid=0, jobs=1): err= 0: pid=2221810: Fri Nov 29 13:17:24 2024 00:33:25.189 read: IOPS=4734, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1003msec) 00:33:25.189 slat (nsec): min=1766, max=12325k, avg=101806.73, stdev=779728.88 00:33:25.189 clat (usec): min=544, max=25193, avg=13137.88, stdev=2881.05 00:33:25.189 lat (usec): min=4641, max=31667, avg=13239.69, stdev=2944.79 00:33:25.189 clat percentiles (usec): 00:33:25.189 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10814], 00:33:25.189 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12780], 60.00th=[13304], 00:33:25.189 | 70.00th=[13698], 80.00th=[15008], 90.00th=[16909], 95.00th=[19006], 00:33:25.189 | 99.00th=[21627], 99.50th=[23987], 99.90th=[24773], 99.95th=[24773], 00:33:25.189 | 99.99th=[25297] 00:33:25.189 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:33:25.189 slat (usec): min=2, max=10692, avg=93.26, stdev=690.75 00:33:25.189 clat (usec): min=446, max=35158, avg=12619.08, stdev=4143.14 00:33:25.189 lat (usec): min=846, max=35168, avg=12712.35, stdev=4194.81 00:33:25.189 clat percentiles (usec): 00:33:25.189 | 1.00th=[ 5473], 5.00th=[ 7504], 10.00th=[ 8356], 20.00th=[10683], 00:33:25.189 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12649], 60.00th=[12911], 00:33:25.189 | 70.00th=[13304], 80.00th=[13566], 90.00th=[16057], 95.00th=[19268], 00:33:25.189 | 99.00th=[32113], 99.50th=[34866], 99.90th=[34866], 99.95th=[35390], 00:33:25.189 | 99.99th=[35390] 00:33:25.189 bw ( KiB/s): min=20480, max=20480, per=25.63%, avg=20480.00, stdev= 0.00, samples=2 00:33:25.189 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:33:25.189 lat 
(usec) : 500=0.01%, 750=0.01%, 1000=0.07% 00:33:25.189 lat (msec) : 2=0.09%, 4=0.08%, 10=10.64%, 20=85.66%, 50=3.43% 00:33:25.189 cpu : usr=4.59%, sys=6.89%, ctx=273, majf=0, minf=1 00:33:25.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:25.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:25.189 issued rwts: total=4749,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:25.189 job3: (groupid=0, jobs=1): err= 0: pid=2221815: Fri Nov 29 13:17:24 2024 00:33:25.189 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:33:25.189 slat (nsec): min=1477, max=14796k, avg=112888.20, stdev=897019.10 00:33:25.189 clat (usec): min=1349, max=33651, avg=14210.52, stdev=4148.01 00:33:25.189 lat (usec): min=1357, max=34804, avg=14323.41, stdev=4205.17 00:33:25.189 clat percentiles (usec): 00:33:25.189 | 1.00th=[ 5997], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11469], 00:33:25.189 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13566], 00:33:25.189 | 70.00th=[14484], 80.00th=[16909], 90.00th=[20055], 95.00th=[21627], 00:33:25.189 | 99.00th=[28705], 99.50th=[31327], 99.90th=[33817], 99.95th=[33817], 00:33:25.189 | 99.99th=[33817] 00:33:25.189 write: IOPS=5037, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1008msec); 0 zone resets 00:33:25.189 slat (usec): min=2, max=10783, avg=85.21, stdev=571.40 00:33:25.189 clat (usec): min=401, max=33634, avg=12244.74, stdev=3582.90 00:33:25.189 lat (usec): min=414, max=33638, avg=12329.94, stdev=3618.01 00:33:25.189 clat percentiles (usec): 00:33:25.189 | 1.00th=[ 1532], 5.00th=[ 6456], 10.00th=[ 8160], 20.00th=[ 9503], 00:33:25.189 | 30.00th=[11338], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:33:25.189 | 70.00th=[13304], 80.00th=[13435], 90.00th=[16909], 95.00th=[18220], 00:33:25.189 | 99.00th=[23725], 
99.50th=[23987], 99.90th=[24249], 99.95th=[24249], 00:33:25.189 | 99.99th=[33817] 00:33:25.189 bw ( KiB/s): min=19144, max=20464, per=24.79%, avg=19804.00, stdev=933.38, samples=2 00:33:25.189 iops : min= 4786, max= 5116, avg=4951.00, stdev=233.35, samples=2 00:33:25.189 lat (usec) : 500=0.02% 00:33:25.189 lat (msec) : 2=0.82%, 4=0.81%, 10=12.75%, 20=79.09%, 50=6.51% 00:33:25.189 cpu : usr=3.48%, sys=5.96%, ctx=447, majf=0, minf=1 00:33:25.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:33:25.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:25.189 issued rwts: total=4608,5078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:25.189 00:33:25.189 Run status group 0 (all jobs): 00:33:25.189 READ: bw=74.0MiB/s (77.6MB/s), 17.9MiB/s-20.0MiB/s (18.7MB/s-20.9MB/s), io=74.6MiB (78.2MB), run=1002-1008msec 00:33:25.189 WRITE: bw=78.0MiB/s (81.8MB/s), 18.0MiB/s-20.8MiB/s (18.8MB/s-21.8MB/s), io=78.7MiB (82.5MB), run=1002-1008msec 00:33:25.189 00:33:25.189 Disk stats (read/write): 00:33:25.189 nvme0n1: ios=4355/4608, merge=0/0, ticks=17204/17471, in_queue=34675, util=98.20% 00:33:25.189 nvme0n2: ios=3584/4061, merge=0/0, ticks=27734/26473, in_queue=54207, util=86.70% 00:33:25.189 nvme0n3: ios=4139/4271, merge=0/0, ticks=46200/46163, in_queue=92363, util=96.67% 00:33:25.189 nvme0n4: ios=3969/4096, merge=0/0, ticks=56156/49020, in_queue=105176, util=99.58% 00:33:25.189 13:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:25.189 [global] 00:33:25.189 thread=1 00:33:25.189 invalidate=1 00:33:25.189 rw=randwrite 00:33:25.189 time_based=1 00:33:25.189 runtime=1 00:33:25.189 ioengine=libaio 00:33:25.189 direct=1 
00:33:25.189 bs=4096 00:33:25.189 iodepth=128 00:33:25.189 norandommap=0 00:33:25.189 numjobs=1 00:33:25.189 00:33:25.189 verify_dump=1 00:33:25.189 verify_backlog=512 00:33:25.189 verify_state_save=0 00:33:25.189 do_verify=1 00:33:25.189 verify=crc32c-intel 00:33:25.189 [job0] 00:33:25.189 filename=/dev/nvme0n1 00:33:25.189 [job1] 00:33:25.189 filename=/dev/nvme0n2 00:33:25.189 [job2] 00:33:25.189 filename=/dev/nvme0n3 00:33:25.189 [job3] 00:33:25.189 filename=/dev/nvme0n4 00:33:25.189 Could not set queue depth (nvme0n1) 00:33:25.189 Could not set queue depth (nvme0n2) 00:33:25.189 Could not set queue depth (nvme0n3) 00:33:25.189 Could not set queue depth (nvme0n4) 00:33:25.449 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:25.449 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:25.449 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:25.449 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:25.449 fio-3.35 00:33:25.449 Starting 4 threads 00:33:26.848 00:33:26.848 job0: (groupid=0, jobs=1): err= 0: pid=2222200: Fri Nov 29 13:17:26 2024 00:33:26.848 read: IOPS=3545, BW=13.9MiB/s (14.5MB/s)(14.5MiB/1044msec) 00:33:26.848 slat (nsec): min=1132, max=14236k, avg=132086.19, stdev=765143.06 00:33:26.848 clat (usec): min=6062, max=56097, avg=18303.26, stdev=7636.61 00:33:26.848 lat (usec): min=6070, max=56101, avg=18435.35, stdev=7655.62 00:33:26.848 clat percentiles (usec): 00:33:26.848 | 1.00th=[ 8291], 5.00th=[10552], 10.00th=[11338], 20.00th=[12387], 00:33:26.848 | 30.00th=[13435], 40.00th=[15401], 50.00th=[17171], 60.00th=[19006], 00:33:26.848 | 70.00th=[19792], 80.00th=[22152], 90.00th=[25035], 95.00th=[30278], 00:33:26.848 | 99.00th=[53216], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 
00:33:26.848 | 99.99th=[55837] 00:33:26.848 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:33:26.848 slat (nsec): min=1964, max=13384k, avg=118086.82, stdev=752759.57 00:33:26.848 clat (usec): min=1228, max=56672, avg=15785.17, stdev=5387.66 00:33:26.848 lat (usec): min=1239, max=56676, avg=15903.26, stdev=5427.57 00:33:26.848 clat percentiles (usec): 00:33:26.848 | 1.00th=[ 6783], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[11076], 00:33:26.848 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14877], 60.00th=[16188], 00:33:26.848 | 70.00th=[17433], 80.00th=[19268], 90.00th=[22414], 95.00th=[26084], 00:33:26.848 | 99.00th=[30540], 99.50th=[33162], 99.90th=[56886], 99.95th=[56886], 00:33:26.848 | 99.99th=[56886] 00:33:26.848 bw ( KiB/s): min=16304, max=16384, per=23.81%, avg=16344.00, stdev=56.57, samples=2 00:33:26.848 iops : min= 4076, max= 4096, avg=4086.00, stdev=14.14, samples=2 00:33:26.848 lat (msec) : 2=0.12%, 10=8.98%, 20=69.15%, 50=21.06%, 100=0.71% 00:33:26.848 cpu : usr=2.97%, sys=5.37%, ctx=294, majf=0, minf=1 00:33:26.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:26.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:26.848 issued rwts: total=3702,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:26.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:26.848 job1: (groupid=0, jobs=1): err= 0: pid=2222223: Fri Nov 29 13:17:26 2024 00:33:26.848 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:33:26.848 slat (nsec): min=1053, max=15333k, avg=83712.12, stdev=653456.83 00:33:26.848 clat (usec): min=3256, max=34113, avg=12200.62, stdev=5150.56 00:33:26.848 lat (usec): min=3259, max=34135, avg=12284.34, stdev=5195.31 00:33:26.848 clat percentiles (usec): 00:33:26.848 | 1.00th=[ 3556], 5.00th=[ 6390], 10.00th=[ 7177], 20.00th=[ 8586], 00:33:26.848 | 30.00th=[ 
9503], 40.00th=[10028], 50.00th=[10421], 60.00th=[10945], 00:33:26.848 | 70.00th=[13042], 80.00th=[15401], 90.00th=[20579], 95.00th=[23200], 00:33:26.848 | 99.00th=[30016], 99.50th=[30016], 99.90th=[31065], 99.95th=[32375], 00:33:26.848 | 99.99th=[34341] 00:33:26.848 write: IOPS=5426, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1003msec); 0 zone resets 00:33:26.848 slat (nsec): min=1810, max=13539k, avg=91132.59, stdev=576513.65 00:33:26.848 clat (usec): min=340, max=30616, avg=11893.74, stdev=5312.90 00:33:26.849 lat (usec): min=939, max=30624, avg=11984.87, stdev=5353.60 00:33:26.849 clat percentiles (usec): 00:33:26.849 | 1.00th=[ 2540], 5.00th=[ 4080], 10.00th=[ 6063], 20.00th=[ 8160], 00:33:26.849 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[11338], 00:33:26.849 | 70.00th=[14091], 80.00th=[16581], 90.00th=[19268], 95.00th=[22152], 00:33:26.849 | 99.00th=[27395], 99.50th=[29230], 99.90th=[29754], 99.95th=[30278], 00:33:26.849 | 99.99th=[30540] 00:33:26.849 bw ( KiB/s): min=20480, max=22040, per=30.97%, avg=21260.00, stdev=1103.09, samples=2 00:33:26.849 iops : min= 5120, max= 5510, avg=5315.00, stdev=275.77, samples=2 00:33:26.849 lat (usec) : 500=0.01%, 1000=0.12% 00:33:26.849 lat (msec) : 2=0.30%, 4=2.59%, 10=37.21%, 20=49.44%, 50=10.32% 00:33:26.849 cpu : usr=2.89%, sys=5.49%, ctx=459, majf=0, minf=1 00:33:26.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:26.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:26.849 issued rwts: total=5120,5443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:26.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:26.849 job2: (groupid=0, jobs=1): err= 0: pid=2222258: Fri Nov 29 13:17:26 2024 00:33:26.849 read: IOPS=3633, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1004msec) 00:33:26.849 slat (nsec): min=1060, max=52030k, avg=150051.49, stdev=1453005.01 00:33:26.849 clat 
(usec): min=549, max=80719, avg=17225.15, stdev=13586.20 00:33:26.849 lat (usec): min=2604, max=80730, avg=17375.20, stdev=13668.12 00:33:26.849 clat percentiles (usec): 00:33:26.849 | 1.00th=[ 5014], 5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[10159], 00:33:26.849 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12387], 60.00th=[13042], 00:33:26.849 | 70.00th=[14615], 80.00th=[20317], 90.00th=[29492], 95.00th=[49021], 00:33:26.849 | 99.00th=[74974], 99.50th=[74974], 99.90th=[80217], 99.95th=[80217], 00:33:26.849 | 99.99th=[80217] 00:33:26.849 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:33:26.849 slat (nsec): min=1869, max=10414k, avg=103921.35, stdev=604252.84 00:33:26.849 clat (usec): min=633, max=105618, avg=15395.72, stdev=13872.75 00:33:26.849 lat (usec): min=691, max=105625, avg=15499.64, stdev=13922.16 00:33:26.849 clat percentiles (msec): 00:33:26.849 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:33:26.849 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:33:26.849 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 23], 95.00th=[ 26], 00:33:26.849 | 99.00th=[ 100], 99.50th=[ 104], 99.90th=[ 106], 99.95th=[ 106], 00:33:26.849 | 99.99th=[ 106] 00:33:26.849 bw ( KiB/s): min=14920, max=17336, per=23.49%, avg=16128.00, stdev=1708.37, samples=2 00:33:26.849 iops : min= 3730, max= 4334, avg=4032.00, stdev=427.09, samples=2 00:33:26.849 lat (usec) : 750=0.03% 00:33:26.849 lat (msec) : 4=0.21%, 10=16.61%, 20=66.89%, 50=12.60%, 100=3.28% 00:33:26.849 lat (msec) : 250=0.39% 00:33:26.849 cpu : usr=1.79%, sys=4.49%, ctx=359, majf=0, minf=1 00:33:26.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:26.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:26.849 issued rwts: total=3648,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:26.849 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:33:26.849 job3: (groupid=0, jobs=1): err= 0: pid=2222269: Fri Nov 29 13:17:26 2024 00:33:26.849 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:33:26.849 slat (nsec): min=1110, max=13187k, avg=121353.29, stdev=730792.89 00:33:26.849 clat (usec): min=7688, max=42985, avg=15295.00, stdev=6102.26 00:33:26.849 lat (usec): min=7694, max=42990, avg=15416.35, stdev=6136.36 00:33:26.849 clat percentiles (usec): 00:33:26.849 | 1.00th=[ 8029], 5.00th=[10028], 10.00th=[10552], 20.00th=[11469], 00:33:26.849 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12911], 60.00th=[13698], 00:33:26.849 | 70.00th=[15795], 80.00th=[18744], 90.00th=[23987], 95.00th=[29230], 00:33:26.849 | 99.00th=[39060], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:26.849 | 99.99th=[42730] 00:33:26.849 write: IOPS=4275, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1002msec); 0 zone resets 00:33:26.849 slat (nsec): min=1877, max=12801k, avg=112741.90, stdev=645381.53 00:33:26.849 clat (usec): min=361, max=51481, avg=14896.46, stdev=7499.84 00:33:26.849 lat (usec): min=3164, max=51495, avg=15009.20, stdev=7550.22 00:33:26.849 clat percentiles (usec): 00:33:26.849 | 1.00th=[ 6259], 5.00th=[ 8029], 10.00th=[ 9503], 20.00th=[11076], 00:33:26.849 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13435], 00:33:26.849 | 70.00th=[14746], 80.00th=[16450], 90.00th=[22938], 95.00th=[27919], 00:33:26.849 | 99.00th=[47973], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:33:26.849 | 99.99th=[51643] 00:33:26.849 bw ( KiB/s): min=16384, max=16864, per=24.21%, avg=16624.00, stdev=339.41, samples=2 00:33:26.849 iops : min= 4096, max= 4216, avg=4156.00, stdev=84.85, samples=2 00:33:26.849 lat (usec) : 500=0.01% 00:33:26.849 lat (msec) : 4=0.38%, 10=8.09%, 20=76.19%, 50=15.07%, 100=0.25% 00:33:26.849 cpu : usr=2.50%, sys=4.10%, ctx=425, majf=0, minf=1 00:33:26.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:26.849 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:26.849 issued rwts: total=4096,4284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:26.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:26.849 00:33:26.849 Run status group 0 (all jobs): 00:33:26.849 READ: bw=62.0MiB/s (65.0MB/s), 13.9MiB/s-19.9MiB/s (14.5MB/s-20.9MB/s), io=64.7MiB (67.9MB), run=1002-1044msec 00:33:26.849 WRITE: bw=67.0MiB/s (70.3MB/s), 15.3MiB/s-21.2MiB/s (16.1MB/s-22.2MB/s), io=70.0MiB (73.4MB), run=1002-1044msec 00:33:26.849 00:33:26.849 Disk stats (read/write): 00:33:26.849 nvme0n1: ios=2888/3072, merge=0/0, ticks=17219/18078, in_queue=35297, util=96.29% 00:33:26.849 nvme0n2: ios=3741/4096, merge=0/0, ticks=30011/30187, in_queue=60198, util=99.79% 00:33:26.849 nvme0n3: ios=3522/3584, merge=0/0, ticks=21523/17284, in_queue=38807, util=99.46% 00:33:26.849 nvme0n4: ios=3129/3582, merge=0/0, ticks=14948/17768, in_queue=32716, util=95.79% 00:33:26.849 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:26.849 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2222382 00:33:26.849 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:26.849 13:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:26.849 [global] 00:33:26.849 thread=1 00:33:26.849 invalidate=1 00:33:26.849 rw=read 00:33:26.849 time_based=1 00:33:26.849 runtime=10 00:33:26.849 ioengine=libaio 00:33:26.849 direct=1 00:33:26.849 bs=4096 00:33:26.849 iodepth=1 00:33:26.849 norandommap=1 00:33:26.849 numjobs=1 00:33:26.849 00:33:26.849 [job0] 00:33:26.849 filename=/dev/nvme0n1 00:33:26.849 [job1] 00:33:26.849 filename=/dev/nvme0n2 00:33:26.849 
[job2] 00:33:26.849 filename=/dev/nvme0n3 00:33:26.849 [job3] 00:33:26.849 filename=/dev/nvme0n4 00:33:26.849 Could not set queue depth (nvme0n1) 00:33:26.849 Could not set queue depth (nvme0n2) 00:33:26.849 Could not set queue depth (nvme0n3) 00:33:26.849 Could not set queue depth (nvme0n4) 00:33:27.112 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:27.112 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:27.112 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:27.112 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:27.112 fio-3.35 00:33:27.112 Starting 4 threads 00:33:29.637 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:29.895 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=36790272, buflen=4096 00:33:29.895 fio: pid=2222671, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:29.895 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:30.154 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:30.154 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:30.154 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38682624, buflen=4096 00:33:30.154 fio: pid=2222670, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
00:33:30.412 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45137920, buflen=4096 00:33:30.412 fio: pid=2222668, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:30.412 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:30.412 13:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:30.412 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48328704, buflen=4096 00:33:30.412 fio: pid=2222669, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:30.412 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:30.412 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:30.670 00:33:30.670 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2222668: Fri Nov 29 13:17:30 2024 00:33:30.670 read: IOPS=3519, BW=13.7MiB/s (14.4MB/s)(43.0MiB/3131msec) 00:33:30.670 slat (usec): min=5, max=11657, avg=11.27, stdev=137.81 00:33:30.670 clat (usec): min=196, max=1543, avg=269.01, stdev=37.58 00:33:30.670 lat (usec): min=215, max=12014, avg=280.28, stdev=143.87 00:33:30.670 clat percentiles (usec): 00:33:30.670 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 247], 00:33:30.670 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:33:30.670 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 347], 00:33:30.670 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[ 502], 99.95th=[ 510], 00:33:30.670 | 99.99th=[ 523] 
00:33:30.670 bw ( KiB/s): min=13424, max=14952, per=28.72%, avg=14172.33, stdev=711.68, samples=6 00:33:30.670 iops : min= 3356, max= 3738, avg=3543.00, stdev=177.86, samples=6 00:33:30.670 lat (usec) : 250=28.03%, 500=71.85%, 750=0.10% 00:33:30.670 lat (msec) : 2=0.01% 00:33:30.670 cpu : usr=1.53%, sys=5.56%, ctx=11027, majf=0, minf=1 00:33:30.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.671 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.671 issued rwts: total=11021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:30.671 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2222669: Fri Nov 29 13:17:30 2024 00:33:30.671 read: IOPS=3529, BW=13.8MiB/s (14.5MB/s)(46.1MiB/3343msec) 00:33:30.671 slat (usec): min=3, max=22565, avg=12.83, stdev=251.95 00:33:30.671 clat (usec): min=186, max=41182, avg=267.71, stdev=380.81 00:33:30.671 lat (usec): min=201, max=41192, avg=280.54, stdev=458.30 00:33:30.671 clat percentiles (usec): 00:33:30.671 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 231], 00:33:30.671 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 00:33:30.671 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 330], 95.00th=[ 383], 00:33:30.671 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 515], 99.95th=[ 562], 00:33:30.671 | 99.99th=[ 1926] 00:33:30.671 bw ( KiB/s): min=12616, max=15952, per=28.80%, avg=14214.17, stdev=1457.61, samples=6 00:33:30.671 iops : min= 3154, max= 3988, avg=3553.50, stdev=364.38, samples=6 00:33:30.671 lat (usec) : 250=54.18%, 500=45.61%, 750=0.17% 00:33:30.671 lat (msec) : 2=0.03%, 50=0.01% 00:33:30.671 cpu : usr=1.71%, sys=6.19%, ctx=11804, majf=0, minf=2 00:33:30.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:33:30.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.671 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.671 issued rwts: total=11800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:30.671 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2222670: Fri Nov 29 13:17:30 2024 00:33:30.671 read: IOPS=3218, BW=12.6MiB/s (13.2MB/s)(36.9MiB/2935msec) 00:33:30.671 slat (nsec): min=5951, max=44558, avg=8661.62, stdev=1428.03 00:33:30.671 clat (usec): min=211, max=3935, avg=297.91, stdev=68.53 00:33:30.671 lat (usec): min=219, max=3949, avg=306.57, stdev=68.58 00:33:30.671 clat percentiles (usec): 00:33:30.671 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 260], 00:33:30.671 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:33:30.671 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 371], 95.00th=[ 396], 00:33:30.671 | 99.00th=[ 445], 99.50th=[ 474], 99.90th=[ 510], 99.95th=[ 529], 00:33:30.671 | 99.99th=[ 3949] 00:33:30.671 bw ( KiB/s): min=12320, max=13320, per=25.92%, avg=12793.60, stdev=481.25, samples=5 00:33:30.671 iops : min= 3080, max= 3330, avg=3198.40, stdev=120.31, samples=5 00:33:30.671 lat (usec) : 250=11.03%, 500=88.69%, 750=0.24% 00:33:30.671 lat (msec) : 4=0.02% 00:33:30.671 cpu : usr=1.47%, sys=5.62%, ctx=9446, majf=0, minf=2 00:33:30.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.671 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.671 issued rwts: total=9445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:30.671 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=2222671: Fri Nov 29 13:17:30 2024 00:33:30.671 read: IOPS=3292, BW=12.9MiB/s (13.5MB/s)(35.1MiB/2728msec) 00:33:30.671 slat (nsec): min=6656, max=31777, avg=7720.09, stdev=1037.68 00:33:30.671 clat (usec): min=261, max=2122, avg=292.57, stdev=32.35 00:33:30.671 lat (usec): min=268, max=2129, avg=300.29, stdev=32.38 00:33:30.671 clat percentiles (usec): 00:33:30.671 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 277], 20.00th=[ 281], 00:33:30.671 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 289], 60.00th=[ 293], 00:33:30.671 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 314], 00:33:30.671 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 510], 99.95th=[ 603], 00:33:30.671 | 99.99th=[ 2114] 00:33:30.671 bw ( KiB/s): min=13224, max=13376, per=26.94%, avg=13297.60, stdev=72.31, samples=5 00:33:30.671 iops : min= 3306, max= 3344, avg=3324.40, stdev=18.08, samples=5 00:33:30.671 lat (usec) : 500=99.88%, 750=0.07% 00:33:30.671 lat (msec) : 2=0.03%, 4=0.01% 00:33:30.671 cpu : usr=0.70%, sys=3.19%, ctx=8984, majf=0, minf=2 00:33:30.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:30.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.671 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.671 issued rwts: total=8983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:30.671 00:33:30.671 Run status group 0 (all jobs): 00:33:30.671 READ: bw=48.2MiB/s (50.5MB/s), 12.6MiB/s-13.8MiB/s (13.2MB/s-14.5MB/s), io=161MiB (169MB), run=2728-3343msec 00:33:30.671 00:33:30.671 Disk stats (read/write): 00:33:30.671 nvme0n1: ios=11023/0, merge=0/0, ticks=3561/0, in_queue=3561, util=99.45% 00:33:30.671 nvme0n2: ios=11035/0, merge=0/0, ticks=2858/0, in_queue=2858, util=96.04% 00:33:30.671 nvme0n3: ios=9267/0, merge=0/0, ticks=3661/0, in_queue=3661, util=99.46% 00:33:30.671 nvme0n4: ios=8684/0, merge=0/0, ticks=2610/0, 
in_queue=2610, util=99.33% 00:33:30.671 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:30.671 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:30.929 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:30.929 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:31.187 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:31.187 13:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:31.445 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:31.445 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2222382 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:33:31.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:31.704 nvmf hotplug test: fio failed as expected 00:33:31.704 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:31.963 rmmod nvme_tcp 00:33:31.963 rmmod nvme_fabrics 00:33:31.963 rmmod nvme_keyring 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2219845 ']' 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2219845 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2219845 ']' 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2219845 
00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2219845 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2219845' 00:33:31.963 killing process with pid 2219845 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2219845 00:33:31.963 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2219845 00:33:32.221 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-restore 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.222 13:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.125 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:34.384 00:33:34.384 real 0m25.286s 00:33:34.384 user 1m30.159s 00:33:34.384 sys 0m11.401s 00:33:34.384 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:34.384 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:34.384 ************************************ 00:33:34.384 END TEST nvmf_fio_target 00:33:34.384 ************************************ 00:33:34.384 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:34.384 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:34.384 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.384 13:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:34.384 ************************************ 00:33:34.384 START TEST 
nvmf_bdevio 00:33:34.384 ************************************ 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:34.384 * Looking for test storage... 00:33:34.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.384 13:17:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:34.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.384 --rc genhtml_branch_coverage=1 00:33:34.384 --rc genhtml_function_coverage=1 00:33:34.384 --rc genhtml_legend=1 00:33:34.384 --rc geninfo_all_blocks=1 00:33:34.384 --rc geninfo_unexecuted_blocks=1 00:33:34.384 00:33:34.384 ' 00:33:34.384 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:34.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.384 --rc genhtml_branch_coverage=1 00:33:34.384 --rc genhtml_function_coverage=1 00:33:34.384 --rc genhtml_legend=1 00:33:34.384 --rc geninfo_all_blocks=1 00:33:34.384 --rc geninfo_unexecuted_blocks=1 00:33:34.384 00:33:34.385 ' 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.385 --rc genhtml_branch_coverage=1 00:33:34.385 --rc genhtml_function_coverage=1 00:33:34.385 --rc genhtml_legend=1 00:33:34.385 --rc geninfo_all_blocks=1 00:33:34.385 --rc geninfo_unexecuted_blocks=1 00:33:34.385 00:33:34.385 ' 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:34.385 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.385 --rc genhtml_branch_coverage=1 00:33:34.385 --rc genhtml_function_coverage=1 00:33:34.385 --rc genhtml_legend=1 00:33:34.385 --rc geninfo_all_blocks=1 00:33:34.385 --rc geninfo_unexecuted_blocks=1 00:33:34.385 00:33:34.385 ' 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.385 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.643 13:17:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:34.643 13:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.902 13:17:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:39.902 13:17:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:39.902 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:39.902 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.902 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:39.903 Found net devices under 0000:86:00.0: cvl_0_0 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:39.903 Found net devices under 0000:86:00.1: cvl_0_1 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.903 
13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:39.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:33:39.903 00:33:39.903 --- 10.0.0.2 ping statistics --- 00:33:39.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.903 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:33:39.903 00:33:39.903 --- 10.0.0.1 ping statistics --- 00:33:39.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.903 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2226816 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2226816 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2226816 ']' 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.903 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:39.903 [2024-11-29 13:17:39.630777] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:39.903 [2024-11-29 13:17:39.631797] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:33:39.903 [2024-11-29 13:17:39.631839] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.903 [2024-11-29 13:17:39.699181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:40.161 [2024-11-29 13:17:39.744204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.161 [2024-11-29 13:17:39.744240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.161 [2024-11-29 13:17:39.744247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.161 [2024-11-29 13:17:39.744254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.161 [2024-11-29 13:17:39.744259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:40.161 [2024-11-29 13:17:39.745871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:40.161 [2024-11-29 13:17:39.745999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:40.161 [2024-11-29 13:17:39.746107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:40.161 [2024-11-29 13:17:39.746108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:40.161 [2024-11-29 13:17:39.813310] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:40.161 [2024-11-29 13:17:39.813880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:40.161 [2024-11-29 13:17:39.814259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:40.161 [2024-11-29 13:17:39.814498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:40.161 [2024-11-29 13:17:39.814550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:40.161 [2024-11-29 13:17:39.882891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:40.161 Malloc0 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.161 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:40.162 [2024-11-29 13:17:39.958869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.162 { 00:33:40.162 "params": { 00:33:40.162 "name": "Nvme$subsystem", 00:33:40.162 "trtype": "$TEST_TRANSPORT", 00:33:40.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.162 "adrfam": "ipv4", 00:33:40.162 "trsvcid": "$NVMF_PORT", 00:33:40.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.162 "hdgst": ${hdgst:-false}, 00:33:40.162 "ddgst": ${ddgst:-false} 00:33:40.162 }, 00:33:40.162 "method": "bdev_nvme_attach_controller" 00:33:40.162 } 00:33:40.162 EOF 00:33:40.162 )") 00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:40.162 13:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:40.162 "params": { 00:33:40.162 "name": "Nvme1", 00:33:40.162 "trtype": "tcp", 00:33:40.162 "traddr": "10.0.0.2", 00:33:40.162 "adrfam": "ipv4", 00:33:40.162 "trsvcid": "4420", 00:33:40.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:40.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:40.162 "hdgst": false, 00:33:40.162 "ddgst": false 00:33:40.162 }, 00:33:40.162 "method": "bdev_nvme_attach_controller" 00:33:40.162 }' 00:33:40.419 [2024-11-29 13:17:40.011215] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:33:40.419 [2024-11-29 13:17:40.011265] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226935 ] 00:33:40.419 [2024-11-29 13:17:40.076203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:40.419 [2024-11-29 13:17:40.121309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.419 [2024-11-29 13:17:40.121408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:40.419 [2024-11-29 13:17:40.121527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.676 I/O targets: 00:33:40.676 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:40.676 00:33:40.676 00:33:40.676 CUnit - A unit testing framework for C - Version 2.1-3 00:33:40.676 http://cunit.sourceforge.net/ 00:33:40.676 00:33:40.676 00:33:40.676 Suite: bdevio tests on: Nvme1n1 00:33:40.676 Test: blockdev write read block ...passed 00:33:40.676 Test: blockdev write zeroes read block ...passed 00:33:40.676 Test: blockdev write zeroes read no split ...passed 00:33:40.676 Test: blockdev 
write zeroes read split ...passed 00:33:40.676 Test: blockdev write zeroes read split partial ...passed 00:33:40.676 Test: blockdev reset ...[2024-11-29 13:17:40.460667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:40.676 [2024-11-29 13:17:40.460736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x564350 (9): Bad file descriptor 00:33:40.933 [2024-11-29 13:17:40.505970] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:33:40.933 passed 00:33:40.933 Test: blockdev write read 8 blocks ...passed 00:33:40.933 Test: blockdev write read size > 128k ...passed 00:33:40.933 Test: blockdev write read invalid size ...passed 00:33:40.933 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:40.933 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:40.933 Test: blockdev write read max offset ...passed 00:33:40.933 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:40.933 Test: blockdev writev readv 8 blocks ...passed 00:33:41.191 Test: blockdev writev readv 30 x 1block ...passed 00:33:41.191 Test: blockdev writev readv block ...passed 00:33:41.191 Test: blockdev writev readv size > 128k ...passed 00:33:41.191 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:41.191 Test: blockdev comparev and writev ...[2024-11-29 13:17:40.796912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:41.191 [2024-11-29 13:17:40.796941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.796960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:41.191 
[2024-11-29 13:17:40.796970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.797273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:41.191 [2024-11-29 13:17:40.797283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.797294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:41.191 [2024-11-29 13:17:40.797302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.797595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:41.191 [2024-11-29 13:17:40.797606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.797618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:41.191 [2024-11-29 13:17:40.797625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.797916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:41.191 [2024-11-29 13:17:40.797926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.797937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:41.191 [2024-11-29 13:17:40.797945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:41.191 passed 00:33:41.191 Test: blockdev nvme passthru rw ...passed 00:33:41.191 Test: blockdev nvme passthru vendor specific ...[2024-11-29 13:17:40.880203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:41.191 [2024-11-29 13:17:40.880223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.880348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:41.191 [2024-11-29 13:17:40.880357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.880478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:41.191 [2024-11-29 13:17:40.880487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:41.191 [2024-11-29 13:17:40.880607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:41.191 [2024-11-29 13:17:40.880617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:41.191 passed 00:33:41.191 Test: blockdev nvme admin passthru ...passed 00:33:41.191 Test: blockdev copy ...passed 00:33:41.191 00:33:41.191 Run Summary: Type Total Ran Passed Failed Inactive 00:33:41.191 suites 1 1 n/a 0 0 00:33:41.191 tests 23 23 23 0 0 00:33:41.191 asserts 152 152 152 0 n/a 00:33:41.191 00:33:41.191 Elapsed time = 1.319 
seconds 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.449 rmmod nvme_tcp 00:33:41.449 rmmod nvme_fabrics 00:33:41.449 rmmod nvme_keyring 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2226816 ']' 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2226816 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2226816 ']' 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2226816 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2226816 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:41.449 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:41.450 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2226816' 00:33:41.450 killing process with pid 2226816 00:33:41.450 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2226816 00:33:41.450 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2226816 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.709 13:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.241 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.241 00:33:44.241 real 0m9.426s 00:33:44.241 user 0m8.679s 00:33:44.241 sys 0m4.832s 00:33:44.241 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:44.241 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:44.241 ************************************ 00:33:44.241 END TEST nvmf_bdevio 00:33:44.241 ************************************ 00:33:44.241 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:44.241 00:33:44.241 real 4m24.916s 00:33:44.241 user 9m1.654s 00:33:44.241 sys 1m47.193s 00:33:44.241 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:33:44.241 13:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:44.241 ************************************ 00:33:44.241 END TEST nvmf_target_core_interrupt_mode 00:33:44.241 ************************************ 00:33:44.241 13:17:43 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:44.241 13:17:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:44.241 13:17:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:44.241 13:17:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.241 ************************************ 00:33:44.241 START TEST nvmf_interrupt 00:33:44.241 ************************************ 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:44.241 * Looking for test storage... 
00:33:44.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.241 --rc genhtml_branch_coverage=1 00:33:44.241 --rc genhtml_function_coverage=1 00:33:44.241 --rc genhtml_legend=1 00:33:44.241 --rc geninfo_all_blocks=1 00:33:44.241 --rc geninfo_unexecuted_blocks=1 00:33:44.241 00:33:44.241 ' 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.241 --rc genhtml_branch_coverage=1 00:33:44.241 --rc 
genhtml_function_coverage=1 00:33:44.241 --rc genhtml_legend=1 00:33:44.241 --rc geninfo_all_blocks=1 00:33:44.241 --rc geninfo_unexecuted_blocks=1 00:33:44.241 00:33:44.241 ' 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.241 --rc genhtml_branch_coverage=1 00:33:44.241 --rc genhtml_function_coverage=1 00:33:44.241 --rc genhtml_legend=1 00:33:44.241 --rc geninfo_all_blocks=1 00:33:44.241 --rc geninfo_unexecuted_blocks=1 00:33:44.241 00:33:44.241 ' 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:44.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.241 --rc genhtml_branch_coverage=1 00:33:44.241 --rc genhtml_function_coverage=1 00:33:44.241 --rc genhtml_legend=1 00:33:44.241 --rc geninfo_all_blocks=1 00:33:44.241 --rc geninfo_unexecuted_blocks=1 00:33:44.241 00:33:44.241 ' 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.241 
13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.241 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.242 
13:17:43 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:44.242 13:17:43 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:44.242 
13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:44.242 13:17:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:49.505 13:17:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:49.505 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:49.505 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.505 13:17:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:49.505 Found net devices under 0000:86:00.0: cvl_0_0 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.505 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:49.506 Found net devices under 0000:86:00.1: cvl_0_1 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:49.506 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:49.764 13:17:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:49.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:49.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:33:49.764 00:33:49.764 --- 10.0.0.2 ping statistics --- 00:33:49.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.764 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:49.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:49.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:33:49.764 00:33:49.764 --- 10.0.0.1 ping statistics --- 00:33:49.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.764 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:49.764 13:17:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2230504 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2230504 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2230504 ']' 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:49.764 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:49.764 [2024-11-29 13:17:49.517477] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:49.764 [2024-11-29 13:17:49.518443] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:33:49.764 [2024-11-29 13:17:49.518479] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:50.023 [2024-11-29 13:17:49.585059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:50.023 [2024-11-29 13:17:49.627201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:50.023 [2024-11-29 13:17:49.627240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:50.023 [2024-11-29 13:17:49.627247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:50.023 [2024-11-29 13:17:49.627253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:50.023 [2024-11-29 13:17:49.627258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:50.023 [2024-11-29 13:17:49.628446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.023 [2024-11-29 13:17:49.628451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.023 [2024-11-29 13:17:49.696354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:50.023 [2024-11-29 13:17:49.696553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:50.023 [2024-11-29 13:17:49.696623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:50.023 5000+0 records in 00:33:50.023 5000+0 records out 00:33:50.023 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0170099 s, 602 MB/s 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:50.023 AIO0 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.023 13:17:49 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:50.023 [2024-11-29 13:17:49.821219] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.023 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:50.281 [2024-11-29 13:17:49.845104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2230504 0 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2230504 0 idle 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2230504 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2230504 -w 256 00:33:50.281 13:17:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2230504 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0' 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2230504 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.23 reactor_0 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:50.281 
13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2230504 1 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2230504 1 idle 00:33:50.281 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2230504 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2230504 -w 256 00:33:50.282 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2230551 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2230551 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2230746 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2230504 0 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2230504 0 busy 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2230504 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2230504 -w 256 00:33:50.539 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2230504 root 20 0 128.2g 47616 34560 R 93.8 0.0 0:00.38 reactor_0' 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2230504 root 20 0 128.2g 47616 34560 R 93.8 0.0 0:00.38 reactor_0 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:50.796 13:17:50 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:50.796 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2230504 1 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2230504 1 busy 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2230504 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2230504 -w 256 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2230551 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.25 reactor_1' 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2230551 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:00.25 reactor_1 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:50.797 13:17:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2230746 00:34:00.762 Initializing NVMe Controllers 00:34:00.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:00.762 Controller IO queue size 256, less than required. 00:34:00.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:00.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:00.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:00.762 Initialization complete. Launching workers. 
00:34:00.762 ======================================================== 00:34:00.762 Latency(us) 00:34:00.762 Device Information : IOPS MiB/s Average min max 00:34:00.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16095.60 62.87 15914.39 2758.48 20422.32 00:34:00.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15978.60 62.42 16029.24 4215.65 20099.98 00:34:00.762 ======================================================== 00:34:00.762 Total : 32074.20 125.29 15971.61 2758.48 20422.32 00:34:00.762 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2230504 0 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2230504 0 idle 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2230504 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:00.762 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2230504 -w 256 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2230504 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0' 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2230504 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2230504 1 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2230504 1 idle 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2230504 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:00.763 13:18:00 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2230504 -w 256 00:34:00.763 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2230551 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:09.99 reactor_1' 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2230551 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:09.99 reactor_1 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:01.021 13:18:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:01.279 13:18:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:34:01.279 13:18:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:01.279 13:18:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:01.279 13:18:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:01.279 13:18:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2230504 0 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2230504 0 idle 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2230504 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2230504 -w 256 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2230504 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.37 reactor_0' 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2230504 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.37 reactor_0 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2230504 1 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2230504 1 idle 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2230504 00:34:03.810 
13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2230504 -w 256 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2230551 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.04 reactor_1' 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2230551 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.04 reactor_1 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:03.810 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:03.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:03.811 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:03.811 rmmod nvme_tcp 00:34:04.070 rmmod nvme_fabrics 00:34:04.070 rmmod nvme_keyring 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.070 13:18:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2230504 ']' 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2230504 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2230504 ']' 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2230504 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230504 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230504' 00:34:04.070 killing process with pid 2230504 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2230504 00:34:04.070 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2230504 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:04.329 13:18:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.230 13:18:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.230 00:34:06.230 real 0m22.426s 00:34:06.230 user 0m39.483s 00:34:06.230 sys 0m8.137s 00:34:06.230 13:18:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.230 13:18:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:06.230 ************************************ 00:34:06.230 END TEST nvmf_interrupt 00:34:06.230 ************************************ 00:34:06.230 00:34:06.230 real 26m39.284s 00:34:06.230 user 55m59.887s 00:34:06.230 sys 8m49.558s 00:34:06.230 13:18:06 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.230 13:18:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:06.230 ************************************ 00:34:06.230 END TEST nvmf_tcp 00:34:06.230 ************************************ 00:34:06.230 13:18:06 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:06.230 13:18:06 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:06.230 13:18:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:06.230 13:18:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.230 13:18:06 -- common/autotest_common.sh@10 -- # set +x 00:34:06.489 ************************************ 
00:34:06.489 START TEST spdkcli_nvmf_tcp 00:34:06.489 ************************************ 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:06.489 * Looking for test storage... 00:34:06.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.489 --rc genhtml_branch_coverage=1 00:34:06.489 --rc genhtml_function_coverage=1 00:34:06.489 --rc genhtml_legend=1 00:34:06.489 --rc geninfo_all_blocks=1 00:34:06.489 --rc geninfo_unexecuted_blocks=1 00:34:06.489 00:34:06.489 ' 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.489 --rc genhtml_branch_coverage=1 00:34:06.489 --rc genhtml_function_coverage=1 00:34:06.489 --rc genhtml_legend=1 00:34:06.489 --rc geninfo_all_blocks=1 
00:34:06.489 --rc geninfo_unexecuted_blocks=1 00:34:06.489 00:34:06.489 ' 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.489 --rc genhtml_branch_coverage=1 00:34:06.489 --rc genhtml_function_coverage=1 00:34:06.489 --rc genhtml_legend=1 00:34:06.489 --rc geninfo_all_blocks=1 00:34:06.489 --rc geninfo_unexecuted_blocks=1 00:34:06.489 00:34:06.489 ' 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.489 --rc genhtml_branch_coverage=1 00:34:06.489 --rc genhtml_function_coverage=1 00:34:06.489 --rc genhtml_legend=1 00:34:06.489 --rc geninfo_all_blocks=1 00:34:06.489 --rc geninfo_unexecuted_blocks=1 00:34:06.489 00:34:06.489 ' 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:06.489 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:06.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2233555 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2233555 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2233555 ']' 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:06.490 
13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:06.490 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:06.748 [2024-11-29 13:18:06.330435] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:34:06.748 [2024-11-29 13:18:06.330484] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2233555 ] 00:34:06.748 [2024-11-29 13:18:06.391355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:06.748 [2024-11-29 13:18:06.437410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.748 [2024-11-29 13:18:06.437420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.748 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:06.748 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:06.748 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:06.748 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:06.748 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:06.748 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:07.007 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:07.007 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:34:07.007 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:07.007 13:18:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.007 13:18:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:07.007 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:07.007 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:07.007 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:07.007 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:07.007 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:07.007 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:07.007 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:07.008 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:07.008 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:07.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:07.008 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:07.008 ' 00:34:09.540 [2024-11-29 13:18:09.063161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.476 [2024-11-29 13:18:10.287284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:13.006 [2024-11-29 13:18:12.534201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:34:14.906 [2024-11-29 13:18:14.464101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:16.281 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:16.281 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:16.281 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:16.281 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:16.281 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:16.281 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:16.281 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:16.281 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:16.281 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:16.281 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:16.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:16.281 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:16.281 13:18:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:16.281 13:18:16 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.281 13:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.281 13:18:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:16.281 13:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.281 13:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.539 13:18:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:16.539 13:18:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:16.797 13:18:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:16.797 13:18:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:16.797 13:18:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:16.797 13:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.797 13:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.797 13:18:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:16.797 13:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.797 13:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.797 13:18:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:16.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:16.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:16.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:16.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:16.797 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:16.797 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:16.797 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:16.797 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:16.797 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:16.797 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:16.797 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:16.797 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:16.797 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:16.797 ' 00:34:22.064 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:22.064 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:22.064 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:22.064 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:22.064 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:22.064 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:22.064 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:22.064 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:22.064 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:22.064 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:22.064 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:22.064 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:22.064 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:22.064 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2233555 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2233555 ']' 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2233555 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2233555 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2233555' 00:34:22.064 killing process with pid 2233555 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2233555 00:34:22.064 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2233555 00:34:22.323 13:18:21 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2233555 ']' 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2233555 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2233555 ']' 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2233555 00:34:22.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2233555) - No such process 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2233555 is not found' 00:34:22.323 Process with pid 2233555 is not found 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:22.323 00:34:22.323 real 0m15.851s 00:34:22.323 user 0m33.009s 00:34:22.323 sys 0m0.674s 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.323 13:18:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.323 ************************************ 00:34:22.323 END TEST spdkcli_nvmf_tcp 00:34:22.323 ************************************ 00:34:22.323 13:18:21 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:22.323 13:18:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:22.323 13:18:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:34:22.323 13:18:21 -- common/autotest_common.sh@10 -- # set +x 00:34:22.323 ************************************ 00:34:22.323 START TEST nvmf_identify_passthru 00:34:22.323 ************************************ 00:34:22.323 13:18:21 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:22.323 * Looking for test storage... 00:34:22.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:22.323 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:22.323 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:34:22.323 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:22.323 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.323 13:18:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:22.582 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.582 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.582 --rc genhtml_branch_coverage=1 00:34:22.582 --rc genhtml_function_coverage=1 00:34:22.582 --rc genhtml_legend=1 00:34:22.582 --rc geninfo_all_blocks=1 00:34:22.582 --rc geninfo_unexecuted_blocks=1 00:34:22.582 
00:34:22.582 ' 00:34:22.582 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.582 --rc genhtml_branch_coverage=1 00:34:22.582 --rc genhtml_function_coverage=1 00:34:22.582 --rc genhtml_legend=1 00:34:22.582 --rc geninfo_all_blocks=1 00:34:22.582 --rc geninfo_unexecuted_blocks=1 00:34:22.582 00:34:22.582 ' 00:34:22.582 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.582 --rc genhtml_branch_coverage=1 00:34:22.582 --rc genhtml_function_coverage=1 00:34:22.582 --rc genhtml_legend=1 00:34:22.582 --rc geninfo_all_blocks=1 00:34:22.582 --rc geninfo_unexecuted_blocks=1 00:34:22.582 00:34:22.582 ' 00:34:22.582 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.582 --rc genhtml_branch_coverage=1 00:34:22.582 --rc genhtml_function_coverage=1 00:34:22.582 --rc genhtml_legend=1 00:34:22.582 --rc geninfo_all_blocks=1 00:34:22.582 --rc geninfo_unexecuted_blocks=1 00:34:22.582 00:34:22.582 ' 00:34:22.582 13:18:22 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.582 13:18:22 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.582 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.582 13:18:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:22.583 13:18:22 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:22.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.583 13:18:22 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.583 13:18:22 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.583 13:18:22 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.583 13:18:22 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.583 13:18:22 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:22.583 13:18:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.583 13:18:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.583 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:22.583 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:22.583 13:18:22 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.583 13:18:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:27.848 
13:18:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:27.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:27.848 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:27.848 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:27.849 Found net devices under 0000:86:00.0: cvl_0_0 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:27.849 13:18:27 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:27.849 Found net devices under 0000:86:00.1: cvl_0_1 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:27.849 
13:18:27 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:27.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:27.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:34:27.849 00:34:27.849 --- 10.0.0.2 ping statistics --- 00:34:27.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.849 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:27.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:27.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:34:27.849 00:34:27.849 --- 10.0.0.1 ping statistics --- 00:34:27.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.849 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:27.849 13:18:27 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:27.849 13:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:27.849 13:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:27.849 
13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:34:27.849 13:18:27 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:34:27.849 13:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:27.849 13:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:27.849 13:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:27.849 13:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:27.849 13:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:32.031 13:18:31 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:34:32.031 13:18:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:32.031 13:18:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:32.031 13:18:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:36.216 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:36.216 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:36.216 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:36.216 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2240805 00:34:36.216 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:36.216 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2240805 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2240805 ']' 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.216 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:36.216 [2024-11-29 13:18:35.664612] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:34:36.216 [2024-11-29 13:18:35.664662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.216 [2024-11-29 13:18:35.730462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:36.216 [2024-11-29 13:18:35.775144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.216 [2024-11-29 13:18:35.775181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.216 [2024-11-29 13:18:35.775187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.216 [2024-11-29 13:18:35.775193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.216 [2024-11-29 13:18:35.775198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:36.216 [2024-11-29 13:18:35.776787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.216 [2024-11-29 13:18:35.776887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:36.216 [2024-11-29 13:18:35.776993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:36.216 [2024-11-29 13:18:35.776995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:36.216 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:36.216 INFO: Log level set to 20 00:34:36.216 INFO: Requests: 00:34:36.216 { 00:34:36.216 "jsonrpc": "2.0", 00:34:36.216 "method": "nvmf_set_config", 00:34:36.216 "id": 1, 00:34:36.216 "params": { 00:34:36.216 "admin_cmd_passthru": { 00:34:36.216 "identify_ctrlr": true 00:34:36.216 } 00:34:36.216 } 00:34:36.216 } 00:34:36.216 00:34:36.216 INFO: response: 00:34:36.216 { 00:34:36.216 "jsonrpc": "2.0", 00:34:36.216 "id": 1, 00:34:36.216 "result": true 00:34:36.216 } 00:34:36.216 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.216 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.216 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:36.216 INFO: Setting log level to 20 00:34:36.216 INFO: Setting log level to 20 00:34:36.216 INFO: Log level set to 20 00:34:36.216 INFO: Log level set to 20 00:34:36.216 
INFO: Requests: 00:34:36.216 { 00:34:36.217 "jsonrpc": "2.0", 00:34:36.217 "method": "framework_start_init", 00:34:36.217 "id": 1 00:34:36.217 } 00:34:36.217 00:34:36.217 INFO: Requests: 00:34:36.217 { 00:34:36.217 "jsonrpc": "2.0", 00:34:36.217 "method": "framework_start_init", 00:34:36.217 "id": 1 00:34:36.217 } 00:34:36.217 00:34:36.217 [2024-11-29 13:18:35.893385] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:36.217 INFO: response: 00:34:36.217 { 00:34:36.217 "jsonrpc": "2.0", 00:34:36.217 "id": 1, 00:34:36.217 "result": true 00:34:36.217 } 00:34:36.217 00:34:36.217 INFO: response: 00:34:36.217 { 00:34:36.217 "jsonrpc": "2.0", 00:34:36.217 "id": 1, 00:34:36.217 "result": true 00:34:36.217 } 00:34:36.217 00:34:36.217 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.217 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:36.217 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.217 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:36.217 INFO: Setting log level to 40 00:34:36.217 INFO: Setting log level to 40 00:34:36.217 INFO: Setting log level to 40 00:34:36.217 [2024-11-29 13:18:35.906738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.217 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.217 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:36.217 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:36.217 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:36.217 13:18:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:36.217 13:18:35 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.217 13:18:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:39.496 Nvme0n1 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:39.496 [2024-11-29 13:18:38.814040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.496 13:18:38 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:39.496 [ 00:34:39.496 { 00:34:39.496 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:39.496 "subtype": "Discovery", 00:34:39.496 "listen_addresses": [], 00:34:39.496 "allow_any_host": true, 00:34:39.496 "hosts": [] 00:34:39.496 }, 00:34:39.496 { 00:34:39.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:39.496 "subtype": "NVMe", 00:34:39.496 "listen_addresses": [ 00:34:39.496 { 00:34:39.496 "trtype": "TCP", 00:34:39.496 "adrfam": "IPv4", 00:34:39.496 "traddr": "10.0.0.2", 00:34:39.496 "trsvcid": "4420" 00:34:39.496 } 00:34:39.496 ], 00:34:39.496 "allow_any_host": true, 00:34:39.496 "hosts": [], 00:34:39.496 "serial_number": "SPDK00000000000001", 00:34:39.496 "model_number": "SPDK bdev Controller", 00:34:39.496 "max_namespaces": 1, 00:34:39.496 "min_cntlid": 1, 00:34:39.496 "max_cntlid": 65519, 00:34:39.496 "namespaces": [ 00:34:39.496 { 00:34:39.496 "nsid": 1, 00:34:39.496 "bdev_name": "Nvme0n1", 00:34:39.496 "name": "Nvme0n1", 00:34:39.496 "nguid": "D1A2D52913FC413AAFB59290BB5DD20B", 00:34:39.496 "uuid": "d1a2d529-13fc-413a-afb5-9290bb5dd20b" 00:34:39.496 } 00:34:39.496 ] 00:34:39.496 } 00:34:39.496 ] 00:34:39.496 13:18:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:39.496 13:18:38 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:39.496 13:18:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:39.496 13:18:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:39.496 13:18:39 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:34:39.496 13:18:39 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:39.496 13:18:39 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:39.496 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.496 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:39.496 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.496 13:18:39 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:39.496 13:18:39 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:39.496 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:39.496 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:39.496 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:39.496 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:39.496 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:39.496 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:39.496 rmmod nvme_tcp 00:34:39.754 rmmod nvme_fabrics 00:34:39.754 rmmod nvme_keyring 00:34:39.754 13:18:39 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:39.754 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:39.754 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:39.754 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2240805 ']' 00:34:39.754 13:18:39 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2240805 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2240805 ']' 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2240805 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2240805 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2240805' 00:34:39.754 killing process with pid 2240805 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2240805 00:34:39.754 13:18:39 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2240805 00:34:41.128 13:18:40 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:41.128 13:18:40 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:41.128 13:18:40 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:41.128 13:18:40 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:41.128 13:18:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:41.128 13:18:40 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # iptables-save 00:34:41.128 13:18:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:41.128 13:18:40 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:41.128 13:18:40 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:41.128 13:18:40 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.128 13:18:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:41.128 13:18:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.662 13:18:42 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:43.662 00:34:43.662 real 0m20.950s 00:34:43.662 user 0m26.662s 00:34:43.662 sys 0m5.614s 00:34:43.662 13:18:42 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:43.662 13:18:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:43.662 ************************************ 00:34:43.662 END TEST nvmf_identify_passthru 00:34:43.662 ************************************ 00:34:43.662 13:18:42 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:43.662 13:18:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:43.662 13:18:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:43.662 13:18:42 -- common/autotest_common.sh@10 -- # set +x 00:34:43.662 ************************************ 00:34:43.662 START TEST nvmf_dif 00:34:43.662 ************************************ 00:34:43.662 13:18:43 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:43.662 * Looking for test storage... 
00:34:43.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:43.662 13:18:43 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:43.662 13:18:43 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:34:43.662 13:18:43 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:43.663 13:18:43 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:43.663 13:18:43 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.663 13:18:43 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:43.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.663 --rc genhtml_branch_coverage=1 00:34:43.663 --rc genhtml_function_coverage=1 00:34:43.663 --rc genhtml_legend=1 00:34:43.663 --rc geninfo_all_blocks=1 00:34:43.663 --rc geninfo_unexecuted_blocks=1 00:34:43.663 00:34:43.663 ' 00:34:43.663 13:18:43 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:43.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.663 --rc genhtml_branch_coverage=1 00:34:43.663 --rc genhtml_function_coverage=1 00:34:43.663 --rc genhtml_legend=1 00:34:43.663 --rc geninfo_all_blocks=1 00:34:43.663 --rc geninfo_unexecuted_blocks=1 00:34:43.663 00:34:43.663 ' 00:34:43.663 13:18:43 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:34:43.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.663 --rc genhtml_branch_coverage=1 00:34:43.663 --rc genhtml_function_coverage=1 00:34:43.663 --rc genhtml_legend=1 00:34:43.663 --rc geninfo_all_blocks=1 00:34:43.663 --rc geninfo_unexecuted_blocks=1 00:34:43.663 00:34:43.663 ' 00:34:43.663 13:18:43 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:43.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.663 --rc genhtml_branch_coverage=1 00:34:43.663 --rc genhtml_function_coverage=1 00:34:43.663 --rc genhtml_legend=1 00:34:43.663 --rc geninfo_all_blocks=1 00:34:43.663 --rc geninfo_unexecuted_blocks=1 00:34:43.663 00:34:43.663 ' 00:34:43.663 13:18:43 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:43.663 13:18:43 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.663 13:18:43 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.663 13:18:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.663 13:18:43 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.663 13:18:43 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.663 13:18:43 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:43.663 13:18:43 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:43.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:43.663 13:18:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:43.663 13:18:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:34:43.663 13:18:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:43.663 13:18:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:43.663 13:18:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.663 13:18:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:43.663 13:18:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:43.663 13:18:43 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:43.663 13:18:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:48.929 13:18:48 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.929 13:18:48 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:48.930 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:48.930 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.930 13:18:48 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:48.930 Found net devices under 0000:86:00.0: cvl_0_0 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:48.930 Found net devices under 0000:86:00.1: cvl_0_1 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.930 
13:18:48 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:48.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:48.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:34:48.930 00:34:48.930 --- 10.0.0.2 ping statistics --- 00:34:48.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.930 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:48.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:34:48.930 00:34:48.930 --- 10.0.0.1 ping statistics --- 00:34:48.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.930 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:48.930 13:18:48 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:51.453 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:51.453 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:34:51.453 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:51.453 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:51.453 13:18:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:51.453 13:18:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:51.453 13:18:51 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:51.453 13:18:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2246214 00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2246214 00:34:51.453 13:18:51 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2246214 ']' 00:34:51.453 13:18:51 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.453 13:18:51 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.453 13:18:51 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:51.453 13:18:51 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:51.453 13:18:51 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.453 13:18:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.453 [2024-11-29 13:18:51.144220] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:34:51.453 [2024-11-29 13:18:51.144267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:51.453 [2024-11-29 13:18:51.210083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.453 [2024-11-29 13:18:51.251667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:51.453 [2024-11-29 13:18:51.251701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:51.453 [2024-11-29 13:18:51.251708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:51.453 [2024-11-29 13:18:51.251714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:51.454 [2024-11-29 13:18:51.251719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
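The namespace plumbing earlier in this trace (ip netns add cvl_0_0_ns_spdk, ip link set cvl_0_0 netns ..., ip addr add 10.0.0.1/24 and 10.0.0.2/24) only passes the subsequent ping checks because both addresses sit on one /24: the target interface inside the namespace and the initiator interface in the root namespace share a subnet. A minimal stdlib sketch of that invariant (no root or real interfaces needed; the variable names are illustrative):

```python
import ipaddress

# Addresses assigned by nvmf_tcp_init in the trace above.
initiator = ipaddress.ip_interface("10.0.0.1/24")  # cvl_0_1, root namespace
target = ipaddress.ip_interface("10.0.0.2/24")     # cvl_0_0, inside cvl_0_0_ns_spdk

# Both interfaces must land in the same network for the two ping tests to succeed.
assert initiator.network == target.network
print(initiator.network)  # 10.0.0.0/24
```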
00:34:51.454 [2024-11-29 13:18:51.252260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:51.710 13:18:51 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.710 13:18:51 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.710 13:18:51 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:51.710 13:18:51 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.710 [2024-11-29 13:18:51.384847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.710 13:18:51 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:51.710 13:18:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.710 ************************************ 00:34:51.710 START TEST fio_dif_1_default 00:34:51.710 ************************************ 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:51.710 bdev_null0 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.710 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:51.711 [2024-11-29 13:18:51.445128] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:51.711 { 00:34:51.711 "params": { 00:34:51.711 "name": "Nvme$subsystem", 00:34:51.711 "trtype": "$TEST_TRANSPORT", 00:34:51.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.711 "adrfam": "ipv4", 00:34:51.711 "trsvcid": "$NVMF_PORT", 00:34:51.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.711 "hdgst": ${hdgst:-false}, 00:34:51.711 "ddgst": ${ddgst:-false} 00:34:51.711 }, 00:34:51.711 "method": "bdev_nvme_attach_controller" 00:34:51.711 } 00:34:51.711 EOF 00:34:51.711 )") 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
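gen_nvmf_target_json above expands one heredoc template per subsystem and joins the fragments through jq before handing them to fio_bdev. A rough Python equivalent of that expansion (the function name is illustrative, not part of SPDK, and this emits a JSON array rather than the comma-joined objects jq produces in the trace):

```python
import json

def gen_target_json(subsystems, trtype="tcp", traddr="10.0.0.2", trsvcid="4420"):
    """Mirror the heredoc template: one attach-controller entry per subsystem id."""
    entries = []
    for sub in subsystems:
        entries.append({
            "params": {
                "name": f"Nvme{sub}",
                "trtype": trtype,
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{sub}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{sub}",
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return json.dumps(entries, indent=2)

print(gen_target_json([0]))
```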
00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:51.711 "params": { 00:34:51.711 "name": "Nvme0", 00:34:51.711 "trtype": "tcp", 00:34:51.711 "traddr": "10.0.0.2", 00:34:51.711 "adrfam": "ipv4", 00:34:51.711 "trsvcid": "4420", 00:34:51.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:51.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:51.711 "hdgst": false, 00:34:51.711 "ddgst": false 00:34:51.711 }, 00:34:51.711 "method": "bdev_nvme_attach_controller" 00:34:51.711 }' 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:51.711 13:18:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.282 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:52.282 fio-3.35 
00:34:52.282 Starting 1 thread 00:35:04.463 00:35:04.463 filename0: (groupid=0, jobs=1): err= 0: pid=2246587: Fri Nov 29 13:19:02 2024 00:35:04.463 read: IOPS=188, BW=755KiB/s (774kB/s)(7584KiB/10040msec) 00:35:04.463 slat (nsec): min=6018, max=30888, avg=6365.00, stdev=922.99 00:35:04.463 clat (usec): min=435, max=45478, avg=21162.94, stdev=20564.49 00:35:04.463 lat (usec): min=441, max=45509, avg=21169.30, stdev=20564.43 00:35:04.463 clat percentiles (usec): 00:35:04.463 | 1.00th=[ 445], 5.00th=[ 465], 10.00th=[ 478], 20.00th=[ 490], 00:35:04.463 | 30.00th=[ 502], 40.00th=[ 586], 50.00th=[40633], 60.00th=[41157], 00:35:04.463 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:35:04.463 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:35:04.463 | 99.99th=[45351] 00:35:04.463 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=756.80, stdev=26.01, samples=20 00:35:04.463 iops : min= 168, max= 192, avg=189.20, stdev= 6.50, samples=20 00:35:04.463 lat (usec) : 500=28.96%, 750=20.83% 00:35:04.463 lat (msec) : 50=50.21% 00:35:04.463 cpu : usr=92.44%, sys=7.31%, ctx=7, majf=0, minf=0 00:35:04.463 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:04.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.463 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.463 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:04.463 00:35:04.463 Run status group 0 (all jobs): 00:35:04.463 READ: bw=755KiB/s (774kB/s), 755KiB/s-755KiB/s (774kB/s-774kB/s), io=7584KiB (7766kB), run=10040-10040msec 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
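The run-status line closing the single-threaded job is internally consistent: 7584 KiB of reads over the 10040 ms runtime works out to roughly 755 KiB/s, which matches the reported bw. A small sketch (hypothetical helper; the regex only targets this summary shape, not every fio output format) that re-derives the bandwidth from io and run:

```python
import re

def check_bw(summary):
    """Pull io size and runtime out of a fio run-status line and re-derive bandwidth."""
    m = re.search(r"bw=(\d+)KiB/s.*io=(\d+)KiB.*run=\d+-(\d+)msec", summary)
    bw_kib, io_kib, run_ms = (int(g) for g in m.groups())
    derived = io_kib / (run_ms / 1000.0)  # KiB per second
    return bw_kib, derived

line = ("READ: bw=755KiB/s (774kB/s), 755KiB/s-755KiB/s (774kB/s-774kB/s), "
        "io=7584KiB (7766kB), run=10040-10040msec")
bw, derived = check_bw(line)
assert abs(bw - derived) < 1.0  # 7584 / 10.040 ≈ 755.4
```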
00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.463 00:35:04.463 real 0m11.221s 00:35:04.463 user 0m15.995s 00:35:04.463 sys 0m1.091s 00:35:04.463 13:19:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:04.464 ************************************ 00:35:04.464 END TEST fio_dif_1_default 00:35:04.464 ************************************ 00:35:04.464 13:19:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:04.464 13:19:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:04.464 13:19:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:04.464 13:19:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:04.464 ************************************ 00:35:04.464 START TEST fio_dif_1_multi_subsystems 00:35:04.464 ************************************ 00:35:04.464 13:19:02 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:04.464 bdev_null0 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.464 13:19:02 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:04.464 [2024-11-29 13:19:02.737325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:04.464 bdev_null1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:04.464 { 00:35:04.464 "params": { 00:35:04.464 "name": "Nvme$subsystem", 00:35:04.464 "trtype": "$TEST_TRANSPORT", 00:35:04.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.464 "adrfam": "ipv4", 00:35:04.464 "trsvcid": "$NVMF_PORT", 00:35:04.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.464 "hdgst": ${hdgst:-false}, 00:35:04.464 "ddgst": ${ddgst:-false} 00:35:04.464 }, 00:35:04.464 "method": "bdev_nvme_attach_controller" 00:35:04.464 } 00:35:04.464 EOF 00:35:04.464 )") 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:04.464 { 00:35:04.464 "params": { 00:35:04.464 "name": "Nvme$subsystem", 00:35:04.464 "trtype": "$TEST_TRANSPORT", 00:35:04.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.464 "adrfam": "ipv4", 00:35:04.464 "trsvcid": "$NVMF_PORT", 00:35:04.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.464 "hdgst": ${hdgst:-false}, 00:35:04.464 "ddgst": ${ddgst:-false} 00:35:04.464 }, 00:35:04.464 "method": "bdev_nvme_attach_controller" 00:35:04.464 } 00:35:04.464 EOF 00:35:04.464 )") 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:04.464 "params": { 00:35:04.464 "name": "Nvme0", 00:35:04.464 "trtype": "tcp", 00:35:04.464 "traddr": "10.0.0.2", 00:35:04.464 "adrfam": "ipv4", 00:35:04.464 "trsvcid": "4420", 00:35:04.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:04.464 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:04.464 "hdgst": false, 00:35:04.464 "ddgst": false 00:35:04.464 }, 00:35:04.464 "method": "bdev_nvme_attach_controller" 00:35:04.464 },{ 00:35:04.464 "params": { 00:35:04.464 "name": "Nvme1", 00:35:04.464 "trtype": "tcp", 00:35:04.464 "traddr": "10.0.0.2", 00:35:04.464 "adrfam": "ipv4", 00:35:04.464 "trsvcid": "4420", 00:35:04.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:04.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:04.464 "hdgst": false, 00:35:04.464 "ddgst": false 00:35:04.464 }, 00:35:04.464 "method": "bdev_nvme_attach_controller" 00:35:04.464 }' 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:04.464 13:19:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.464 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:04.464 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:04.464 fio-3.35 00:35:04.464 Starting 2 threads 00:35:14.422 00:35:14.422 filename0: (groupid=0, jobs=1): err= 0: pid=2248560: Fri Nov 29 13:19:14 2024 00:35:14.422 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:35:14.422 slat (nsec): min=6090, max=47276, avg=7928.67, stdev=3124.05 00:35:14.422 clat (usec): min=40756, max=43121, avg=40997.72, stdev=181.13 00:35:14.422 lat (usec): min=40762, max=43164, avg=41005.65, stdev=181.82 00:35:14.422 clat percentiles (usec): 00:35:14.422 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:14.422 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:14.422 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:14.422 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:14.422 | 99.99th=[43254] 00:35:14.422 bw ( KiB/s): min= 384, max= 416, per=33.81%, avg=388.80, stdev=11.72, samples=20 00:35:14.422 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:14.422 lat (msec) : 50=100.00% 00:35:14.422 cpu : usr=96.75%, sys=2.99%, ctx=61, majf=0, minf=141 00:35:14.422 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.422 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.422 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.422 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:14.422 filename1: (groupid=0, jobs=1): err= 0: pid=2248561: Fri Nov 29 13:19:14 2024 00:35:14.422 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10004msec) 00:35:14.422 slat (nsec): min=6101, max=43818, avg=7204.11, stdev=2236.99 00:35:14.422 clat (usec): min=421, max=42576, avg=21083.23, stdev=20514.61 00:35:14.422 lat (usec): min=427, max=42583, avg=21090.44, stdev=20513.96 00:35:14.422 clat percentiles (usec): 00:35:14.422 | 1.00th=[ 429], 5.00th=[ 433], 10.00th=[ 441], 20.00th=[ 449], 00:35:14.422 | 30.00th=[ 461], 40.00th=[ 562], 50.00th=[40633], 60.00th=[41157], 00:35:14.422 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:35:14.422 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:35:14.422 | 99.99th=[42730] 00:35:14.422 bw ( KiB/s): min= 672, max= 768, per=66.14%, avg=759.58, stdev=25.78, samples=19 00:35:14.422 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:35:14.422 lat (usec) : 500=37.29%, 750=11.92%, 1000=0.58% 00:35:14.422 lat (msec) : 50=50.21% 00:35:14.422 cpu : usr=96.74%, sys=3.00%, ctx=10, majf=0, minf=137 00:35:14.422 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.422 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.422 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:14.422 00:35:14.422 Run status group 0 (all jobs): 00:35:14.422 READ: bw=1148KiB/s (1175kB/s), 390KiB/s-758KiB/s (399kB/s-776kB/s), io=11.2MiB (11.8MB), run=10004-10010msec 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.681 13:19:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.681 00:35:14.681 real 0m11.624s 00:35:14.681 user 0m26.215s 00:35:14.681 sys 0m0.952s 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:14.681 ************************************ 00:35:14.681 END TEST fio_dif_1_multi_subsystems 00:35:14.681 ************************************ 00:35:14.681 13:19:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:14.681 13:19:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:14.681 13:19:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:14.681 ************************************ 00:35:14.681 START TEST fio_dif_rand_params 00:35:14.681 ************************************ 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:14.681 13:19:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.681 bdev_null0 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.681 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.682 [2024-11-29 13:19:14.430240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:14.682 { 00:35:14.682 "params": { 00:35:14.682 "name": "Nvme$subsystem", 00:35:14.682 "trtype": "$TEST_TRANSPORT", 00:35:14.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.682 "adrfam": "ipv4", 00:35:14.682 "trsvcid": "$NVMF_PORT", 00:35:14.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.682 "hdgst": ${hdgst:-false}, 00:35:14.682 "ddgst": ${ddgst:-false} 00:35:14.682 }, 00:35:14.682 "method": "bdev_nvme_attach_controller" 00:35:14.682 } 00:35:14.682 EOF 00:35:14.682 )") 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 
00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:14.682 "params": { 00:35:14.682 "name": "Nvme0", 00:35:14.682 "trtype": "tcp", 00:35:14.682 "traddr": "10.0.0.2", 00:35:14.682 "adrfam": "ipv4", 00:35:14.682 "trsvcid": "4420", 00:35:14.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.682 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.682 "hdgst": false, 00:35:14.682 "ddgst": false 00:35:14.682 }, 00:35:14.682 "method": "bdev_nvme_attach_controller" 00:35:14.682 }' 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:14.682 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:14.956 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:14.956 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:14.956 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:14.957 13:19:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.215 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:15.215 ... 00:35:15.215 fio-3.35 00:35:15.215 Starting 3 threads 00:35:21.785 00:35:21.785 filename0: (groupid=0, jobs=1): err= 0: pid=2250520: Fri Nov 29 13:19:20 2024 00:35:21.785 read: IOPS=288, BW=36.1MiB/s (37.9MB/s)(182MiB/5047msec) 00:35:21.785 slat (nsec): min=6372, max=39485, avg=11147.34, stdev=2320.10 00:35:21.785 clat (usec): min=4436, max=50967, avg=10341.31, stdev=7564.93 00:35:21.785 lat (usec): min=4442, max=50979, avg=10352.46, stdev=7564.88 00:35:21.785 clat percentiles (usec): 00:35:21.785 | 1.00th=[ 5604], 5.00th=[ 6390], 10.00th=[ 6849], 20.00th=[ 7767], 00:35:21.785 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:35:21.785 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10814], 95.00th=[11731], 00:35:21.785 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:35:21.785 | 99.99th=[51119] 00:35:21.785 bw ( KiB/s): min=29184, max=44800, per=33.06%, avg=37273.60, stdev=5424.14, samples=10 00:35:21.785 iops : min= 228, max= 350, avg=291.20, stdev=42.38, samples=10 00:35:21.785 lat (msec) : 10=75.10%, 20=21.26%, 50=3.16%, 100=0.48% 00:35:21.785 cpu : usr=94.07%, sys=5.65%, ctx=9, majf=0, minf=69 00:35:21.785 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.785 issued rwts: total=1458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.785 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:21.785 filename0: (groupid=0, jobs=1): err= 0: pid=2250521: Fri Nov 29 13:19:20 2024 00:35:21.785 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(187MiB/5003msec) 00:35:21.785 slat (nsec): min=6379, max=27961, avg=11494.28, 
stdev=2173.51 00:35:21.785 clat (usec): min=3449, max=52396, avg=10044.28, stdev=5755.01 00:35:21.785 lat (usec): min=3456, max=52408, avg=10055.77, stdev=5755.09 00:35:21.785 clat percentiles (usec): 00:35:21.785 | 1.00th=[ 4047], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7504], 00:35:21.785 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10028], 00:35:21.785 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11863], 95.00th=[12780], 00:35:21.785 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[52167], 00:35:21.785 | 99.99th=[52167] 00:35:21.785 bw ( KiB/s): min=30720, max=44544, per=33.81%, avg=38115.56, stdev=4246.14, samples=9 00:35:21.785 iops : min= 240, max= 348, avg=297.78, stdev=33.17, samples=9 00:35:21.785 lat (msec) : 4=0.87%, 10=59.18%, 20=37.94%, 50=1.94%, 100=0.07% 00:35:21.785 cpu : usr=94.36%, sys=5.34%, ctx=16, majf=0, minf=39 00:35:21.785 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.785 issued rwts: total=1492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.785 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:21.785 filename0: (groupid=0, jobs=1): err= 0: pid=2250522: Fri Nov 29 13:19:20 2024 00:35:21.785 read: IOPS=296, BW=37.1MiB/s (38.9MB/s)(187MiB/5043msec) 00:35:21.785 slat (nsec): min=6412, max=49628, avg=11269.67, stdev=2600.81 00:35:21.785 clat (usec): min=3266, max=52561, avg=10077.07, stdev=6573.57 00:35:21.785 lat (usec): min=3273, max=52574, avg=10088.34, stdev=6573.76 00:35:21.785 clat percentiles (usec): 00:35:21.785 | 1.00th=[ 3949], 5.00th=[ 4490], 10.00th=[ 5997], 20.00th=[ 7308], 00:35:21.785 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10028], 00:35:21.785 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11731], 95.00th=[12518], 00:35:21.785 | 99.00th=[49021], 99.50th=[49546], 
99.90th=[51119], 99.95th=[52691], 00:35:21.785 | 99.99th=[52691] 00:35:21.785 bw ( KiB/s): min=28416, max=47104, per=33.90%, avg=38220.80, stdev=5672.00, samples=10 00:35:21.785 iops : min= 222, max= 368, avg=298.60, stdev=44.31, samples=10 00:35:21.785 lat (msec) : 4=1.14%, 10=59.53%, 20=36.79%, 50=2.34%, 100=0.20% 00:35:21.785 cpu : usr=94.09%, sys=5.61%, ctx=10, majf=0, minf=63 00:35:21.785 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:21.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:21.786 issued rwts: total=1495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:21.786 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:21.786 00:35:21.786 Run status group 0 (all jobs): 00:35:21.786 READ: bw=110MiB/s (115MB/s), 36.1MiB/s-37.3MiB/s (37.9MB/s-39.1MB/s), io=556MiB (583MB), run=5003-5047msec 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:21.786 13:19:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 bdev_null0 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 [2024-11-29 13:19:20.789587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 bdev_null1 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:21.786 bdev_null2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:21.786 { 00:35:21.786 "params": { 00:35:21.786 "name": "Nvme$subsystem", 00:35:21.786 "trtype": "$TEST_TRANSPORT", 00:35:21.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:21.786 "adrfam": "ipv4", 00:35:21.786 "trsvcid": "$NVMF_PORT", 00:35:21.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:21.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:21.786 "hdgst": ${hdgst:-false}, 00:35:21.786 "ddgst": ${ddgst:-false} 00:35:21.786 }, 00:35:21.786 "method": "bdev_nvme_attach_controller" 00:35:21.786 } 00:35:21.786 EOF 00:35:21.786 )") 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:21.786 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:21.787 { 00:35:21.787 "params": { 00:35:21.787 "name": "Nvme$subsystem", 00:35:21.787 "trtype": "$TEST_TRANSPORT", 00:35:21.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:21.787 "adrfam": "ipv4", 00:35:21.787 "trsvcid": "$NVMF_PORT", 00:35:21.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:21.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:21.787 "hdgst": ${hdgst:-false}, 00:35:21.787 "ddgst": ${ddgst:-false} 00:35:21.787 }, 00:35:21.787 "method": "bdev_nvme_attach_controller" 00:35:21.787 } 00:35:21.787 EOF 00:35:21.787 )") 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:21.787 { 00:35:21.787 "params": { 00:35:21.787 "name": "Nvme$subsystem", 00:35:21.787 "trtype": "$TEST_TRANSPORT", 00:35:21.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:21.787 "adrfam": "ipv4", 00:35:21.787 "trsvcid": "$NVMF_PORT", 00:35:21.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:21.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:21.787 "hdgst": ${hdgst:-false}, 00:35:21.787 "ddgst": ${ddgst:-false} 00:35:21.787 }, 00:35:21.787 "method": "bdev_nvme_attach_controller" 00:35:21.787 } 00:35:21.787 EOF 00:35:21.787 )") 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:21.787 "params": {
00:35:21.787 "name": "Nvme0",
00:35:21.787 "trtype": "tcp",
00:35:21.787 "traddr": "10.0.0.2",
00:35:21.787 "adrfam": "ipv4",
00:35:21.787 "trsvcid": "4420",
00:35:21.787 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:21.787 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:21.787 "hdgst": false,
00:35:21.787 "ddgst": false
00:35:21.787 },
00:35:21.787 "method": "bdev_nvme_attach_controller"
00:35:21.787 },{
00:35:21.787 "params": {
00:35:21.787 "name": "Nvme1",
00:35:21.787 "trtype": "tcp",
00:35:21.787 "traddr": "10.0.0.2",
00:35:21.787 "adrfam": "ipv4",
00:35:21.787 "trsvcid": "4420",
00:35:21.787 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:21.787 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:21.787 "hdgst": false,
00:35:21.787 "ddgst": false
00:35:21.787 },
00:35:21.787 "method": "bdev_nvme_attach_controller"
00:35:21.787 },{
00:35:21.787 "params": {
00:35:21.787 "name": "Nvme2",
00:35:21.787 "trtype": "tcp",
00:35:21.787 "traddr": "10.0.0.2",
00:35:21.787 "adrfam": "ipv4",
00:35:21.787 "trsvcid": "4420",
00:35:21.787 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:35:21.787 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:35:21.787 "hdgst": false,
00:35:21.787 "ddgst": false
00:35:21.787 },
00:35:21.787 "method": "bdev_nvme_attach_controller"
00:35:21.787 }'
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:35:21.787 13:19:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:21.787 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:35:21.787 ...
00:35:21.787 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:35:21.787 ...
00:35:21.787 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:35:21.787 ...
00:35:21.787 fio-3.35
00:35:21.787 Starting 24 threads
00:35:33.987
00:35:33.987 filename0: (groupid=0, jobs=1): err= 0: pid=2251584: Fri Nov 29 13:19:32 2024
00:35:33.987 read: IOPS=559, BW=2237KiB/s (2290kB/s)(21.9MiB/10015msec)
00:35:33.987 slat (nsec): min=7391, max=90058, avg=26838.83, stdev=10136.40
00:35:33.987 clat (usec): min=12266, max=29862, avg=28359.75, stdev=1169.27
00:35:33.987 lat (usec): min=12276, max=29893, avg=28386.59, stdev=1169.84
00:35:33.987 clat percentiles (usec):
00:35:33.987 | 1.00th=[25035], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181],
00:35:33.987 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.987 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.987 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754],
00:35:33.987 | 99.99th=[29754]
00:35:33.987 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2236.63, stdev=65.66, samples=19
00:35:33.987 iops : min= 544, max= 576, avg=559.16, stdev=16.42, samples=19
00:35:33.987 lat (msec) : 20=0.71%, 50=99.29%
00:35:33.987 cpu : usr=98.79%, sys=0.86%, ctx=14, majf=0, minf=9
00:35:33.987 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:35:33.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.987 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.987 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.987 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.987 filename0: (groupid=0, jobs=1): err= 0: pid=2251586: Fri Nov 29 13:19:32 2024
00:35:33.987 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10004msec)
00:35:33.987 slat (nsec): min=6983, max=76111, avg=18093.82, stdev=14965.93
00:35:33.987 clat (usec): min=9295, max=76078, avg=28564.04, stdev=2497.11
00:35:33.987 lat (usec): min=9309, max=76123, avg=28582.13, stdev=2497.44
00:35:33.987 clat percentiles (usec):
00:35:33.987 | 1.00th=[23725], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443],
00:35:33.987 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.987 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230],
00:35:33.987 | 99.00th=[29492], 99.50th=[39584], 99.90th=[63701], 99.95th=[63701],
00:35:33.987 | 99.99th=[76022]
00:35:33.987 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2216.42, stdev=73.20, samples=19
00:35:33.987 iops : min= 512, max= 576, avg=554.11, stdev=18.30, samples=19
00:35:33.987 lat (msec) : 10=0.29%, 20=0.45%, 50=98.98%, 100=0.29%
00:35:33.987 cpu : usr=98.51%, sys=1.11%, ctx=20, majf=0, minf=9
00:35:33.987 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0%
00:35:33.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.987 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.987 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.987 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.987 filename0: (groupid=0, jobs=1): err= 0: pid=2251587: Fri Nov 29 13:19:32 2024
00:35:33.987 read: IOPS=557, BW=2231KiB/s (2285kB/s)(21.8MiB/10007msec)
00:35:33.987 slat (usec): min=7, max=110, avg=45.06, stdev=17.21
00:35:33.987 clat (usec): min=6881, max=48811, avg=28287.27, stdev=1779.99
00:35:33.987 lat (usec): min=6893, max=48823, avg=28332.33, stdev=1779.57
00:35:33.987 clat percentiles (usec):
00:35:33.987 | 1.00th=[23725], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919],
00:35:33.987 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443],
00:35:33.987 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.987 | 99.00th=[29230], 99.50th=[33817], 99.90th=[49021], 99.95th=[49021],
00:35:33.987 | 99.99th=[49021]
00:35:33.987 bw ( KiB/s): min= 2052, max= 2304, per=4.16%, avg=2227.40, stdev=76.09, samples=20
00:35:33.987 iops : min= 513, max= 576, avg=556.85, stdev=19.02, samples=20
00:35:33.987 lat (msec) : 10=0.25%, 20=0.29%, 50=99.46%
00:35:33.988 cpu : usr=98.62%, sys=0.99%, ctx=15, majf=0, minf=9
00:35:33.988 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0%
00:35:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 issued rwts: total=5582,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.988 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.988 filename0: (groupid=0, jobs=1): err= 0: pid=2251588: Fri Nov 29 13:19:32 2024
00:35:33.988 read: IOPS=558, BW=2235KiB/s (2289kB/s)(21.9MiB/10022msec)
00:35:33.988 slat (nsec): min=7886, max=82619, avg=25022.62, stdev=9763.48
00:35:33.988 clat (usec): min=12175, max=39796, avg=28432.05, stdev=1207.24
00:35:33.988 lat (usec): min=12191, max=39816, avg=28457.07, stdev=1206.70
00:35:33.988 clat percentiles (usec):
00:35:33.988 | 1.00th=[27132], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443],
00:35:33.988 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.988 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.988 | 99.00th=[29492], 99.50th=[29754], 99.90th=[39060], 99.95th=[39584],
00:35:33.988 | 99.99th=[39584]
00:35:33.988 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2233.60, stdev=65.33, samples=20
00:35:33.988 iops : min= 544, max= 576, avg=558.40, stdev=16.33, samples=20
00:35:33.988 lat (msec) : 20=0.68%, 50=99.32%
00:35:33.988 cpu : usr=98.55%, sys=1.09%, ctx=8, majf=0, minf=9
00:35:33.988 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:35:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.988 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.988 filename0: (groupid=0, jobs=1): err= 0: pid=2251589: Fri Nov 29 13:19:32 2024
00:35:33.988 read: IOPS=558, BW=2235KiB/s (2289kB/s)(21.9MiB/10022msec)
00:35:33.988 slat (nsec): min=8156, max=85350, avg=24301.79, stdev=9035.13
00:35:33.988 clat (usec): min=12363, max=29961, avg=28435.33, stdev=1078.01
00:35:33.988 lat (usec): min=12410, max=29979, avg=28459.64, stdev=1077.52
00:35:33.988 clat percentiles (usec):
00:35:33.988 | 1.00th=[27132], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443],
00:35:33.988 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.988 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.988 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[30016],
00:35:33.988 | 99.99th=[30016]
00:35:33.988 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2233.60, stdev=65.33, samples=20
00:35:33.988 iops : min= 544, max= 576, avg=558.40, stdev=16.33, samples=20
00:35:33.988 lat (msec) : 20=0.57%, 50=99.43%
00:35:33.988 cpu : usr=98.41%, sys=1.24%, ctx=19, majf=0, minf=9
00:35:33.988 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:35:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.988 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.988 filename0: (groupid=0, jobs=1): err= 0: pid=2251590: Fri Nov 29 13:19:32 2024
00:35:33.988 read: IOPS=558, BW=2235KiB/s (2288kB/s)(21.9MiB/10024msec)
00:35:33.988 slat (nsec): min=7013, max=86667, avg=25892.52, stdev=10142.70
00:35:33.988 clat (usec): min=12325, max=45161, avg=28408.93, stdev=1174.84
00:35:33.988 lat (usec): min=12340, max=45175, avg=28434.82, stdev=1174.66
00:35:33.988 clat percentiles (usec):
00:35:33.988 | 1.00th=[26346], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181],
00:35:33.988 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.988 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.988 | 99.00th=[29754], 99.50th=[29754], 99.90th=[31065], 99.95th=[32113],
00:35:33.988 | 99.99th=[45351]
00:35:33.988 bw ( KiB/s): min= 2176, max= 2320, per=4.17%, avg=2233.60, stdev=62.60, samples=20
00:35:33.988 iops : min= 544, max= 580, avg=558.40, stdev=15.65, samples=20
00:35:33.988 lat (msec) : 20=0.64%, 50=99.36%
00:35:33.988 cpu : usr=98.45%, sys=1.19%, ctx=10, majf=0, minf=9
00:35:33.988 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0%
00:35:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.988 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.988 filename0: (groupid=0, jobs=1): err= 0: pid=2251591: Fri Nov 29 13:19:32 2024
00:35:33.988 read: IOPS=557, BW=2231KiB/s (2284kB/s)(21.8MiB/10013msec)
00:35:33.988 slat (nsec): min=6234, max=82966, avg=26094.21, stdev=10623.12
00:35:33.988 clat (usec): min=15253, max=31984, avg=28434.40, stdev=785.23
00:35:33.988 lat (usec): min=15261, max=32001, avg=28460.50, stdev=785.74
00:35:33.988 clat percentiles (usec):
00:35:33.988 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181],
00:35:33.988 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.988 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.988 | 99.00th=[29230], 99.50th=[29492], 99.90th=[31851], 99.95th=[31851],
00:35:33.988 | 99.99th=[32113]
00:35:33.988 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2225.80, stdev=62.86, samples=20
00:35:33.988 iops : min= 544, max= 576, avg=556.45, stdev=15.72, samples=20
00:35:33.988 lat (msec) : 20=0.29%, 50=99.71%
00:35:33.988 cpu : usr=98.45%, sys=1.18%, ctx=10, majf=0, minf=9
00:35:33.988 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:35:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.988 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.988 filename0: (groupid=0, jobs=1): err= 0: pid=2251592: Fri Nov 29 13:19:32 2024
00:35:33.988 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10006msec)
00:35:33.988 slat (nsec): min=4334, max=76053, avg=15624.23, stdev=12494.37
00:35:33.988 clat (usec): min=3666, max=29883, avg=28281.93, stdev=2244.47
00:35:33.988 lat (usec): min=3674, max=29898, avg=28297.56, stdev=2244.43
00:35:33.988 clat percentiles (usec):
00:35:33.988 | 1.00th=[15926], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443],
00:35:33.988 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705],
00:35:33.988 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[29230],
00:35:33.988 | 99.00th=[29492], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754],
00:35:33.988 | 99.99th=[29754]
00:35:33.988 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2246.40, stdev=120.90, samples=20
00:35:33.988 iops : min= 544, max= 672, avg=561.60, stdev=30.22, samples=20
00:35:33.988 lat (msec) : 4=0.12%, 10=0.44%, 20=1.14%, 50=98.30%
00:35:33.988 cpu : usr=98.35%, sys=1.30%, ctx=9, majf=0, minf=9
00:35:33.988 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:35:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.988 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.988 filename1: (groupid=0, jobs=1): err= 0: pid=2251593: Fri Nov 29 13:19:32 2024
00:35:33.988 read: IOPS=556, BW=2225KiB/s (2279kB/s)(21.8MiB/10009msec)
00:35:33.988 slat (nsec): min=5732, max=84870, avg=22097.90, stdev=11321.20
00:35:33.988 clat (usec): min=15810, max=53240, avg=28581.09, stdev=1544.20
00:35:33.988 lat (usec): min=15826, max=53255, avg=28603.18, stdev=1543.09
00:35:33.988 clat percentiles (usec):
00:35:33.988 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443],
00:35:33.988 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705],
00:35:33.988 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967],
00:35:33.988 | 99.00th=[29230], 99.50th=[32900], 99.90th=[53216], 99.95th=[53216],
00:35:33.988 | 99.99th=[53216]
00:35:33.988 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2225.90, stdev=73.51, samples=20
00:35:33.988 iops : min= 513, max= 576, avg=556.45, stdev=18.36, samples=20
00:35:33.988 lat (msec) : 20=0.29%, 50=99.43%, 100=0.29%
00:35:33.988 cpu : usr=98.44%, sys=1.20%, ctx=15, majf=0, minf=9
00:35:33.988 IO depths : 1=4.1%, 2=10.3%, 4=24.8%, 8=52.4%, 16=8.4%, 32=0.0%, >=64=0.0%
00:35:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.988 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.988 filename1: (groupid=0, jobs=1): err= 0: pid=2251594: Fri Nov 29 13:19:32 2024
00:35:33.988 read: IOPS=556, BW=2225KiB/s (2279kB/s)(21.8MiB/10008msec)
00:35:33.988 slat (nsec): min=7122, max=85243, avg=22914.90, stdev=10395.49
00:35:33.988 clat (usec): min=23194, max=45701, avg=28569.88, stdev=1056.46
00:35:33.988 lat (usec): min=23220, max=45726, avg=28592.80, stdev=1055.03
00:35:33.988 clat percentiles (usec):
00:35:33.988 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443],
00:35:33.988 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.988 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967],
00:35:33.988 | 99.00th=[29230], 99.50th=[33162], 99.90th=[45876], 99.95th=[45876],
00:35:33.988 | 99.99th=[45876]
00:35:33.988 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2223.16, stdev=76.45, samples=19
00:35:33.988 iops : min= 512, max= 576, avg=555.79, stdev=19.11, samples=19
00:35:33.988 lat (msec) : 50=100.00%
00:35:33.988 cpu : usr=98.54%, sys=1.10%, ctx=15, majf=0, minf=9
00:35:33.988 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0%
00:35:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.988 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.988 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.988 filename1: (groupid=0, jobs=1): err= 0: pid=2251595: Fri Nov 29 13:19:32 2024
00:35:33.988 read: IOPS=558, BW=2235KiB/s (2288kB/s)(21.9MiB/10023msec)
00:35:33.988 slat (nsec): min=11404, max=83116, avg=26969.39, stdev=9526.96
00:35:33.988 clat (usec): min=12321, max=29921, avg=28400.04, stdev=1077.07
00:35:33.988 lat (usec): min=12350, max=29943, avg=28427.01, stdev=1076.87
00:35:33.988 clat percentiles (usec):
00:35:33.989 | 1.00th=[27132], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181],
00:35:33.989 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.989 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.989 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754],
00:35:33.989 | 99.99th=[30016]
00:35:33.989 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2233.60, stdev=65.33, samples=20
00:35:33.989 iops : min= 544, max= 576, avg=558.40, stdev=16.33, samples=20
00:35:33.989 lat (msec) : 20=0.57%, 50=99.43%
00:35:33.989 cpu : usr=98.46%, sys=1.17%, ctx=64, majf=0, minf=9
00:35:33.989 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:35:33.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.989 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.989 filename1: (groupid=0, jobs=1): err= 0: pid=2251596: Fri Nov 29 13:19:32 2024
00:35:33.989 read: IOPS=557, BW=2231KiB/s (2285kB/s)(21.8MiB/10011msec)
00:35:33.989 slat (nsec): min=4376, max=84879, avg=25349.43, stdev=10718.48
00:35:33.989 clat (usec): min=15856, max=45635, avg=28438.28, stdev=1078.62
00:35:33.989 lat (usec): min=15871, max=45656, avg=28463.63, stdev=1078.96
00:35:33.989 clat percentiles (usec):
00:35:33.989 | 1.00th=[25822], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181],
00:35:33.989 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.989 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.989 | 99.00th=[29230], 99.50th=[30802], 99.90th=[45351], 99.95th=[45351],
00:35:33.989 | 99.99th=[45876]
00:35:33.989 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2223.16, stdev=63.44, samples=19
00:35:33.989 iops : min= 544, max= 576, avg=555.79, stdev=15.86, samples=19
00:35:33.989 lat (msec) : 20=0.39%, 50=99.61%
00:35:33.989 cpu : usr=98.47%, sys=1.18%, ctx=16, majf=0, minf=9
00:35:33.989 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:35:33.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 issued rwts: total=5584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.989 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.989 filename1: (groupid=0, jobs=1): err= 0: pid=2251597: Fri Nov 29 13:19:32 2024
00:35:33.989 read: IOPS=559, BW=2239KiB/s (2293kB/s)(21.9MiB/10004msec)
00:35:33.989 slat (nsec): min=5970, max=88143, avg=22668.05, stdev=10841.20
00:35:33.989 clat (usec): min=15288, max=48428, avg=28384.29, stdev=2027.91
00:35:33.989 lat (usec): min=15304, max=48445, avg=28406.96, stdev=2027.86
00:35:33.989 clat percentiles (usec):
00:35:33.989 | 1.00th=[21103], 5.00th=[26084], 10.00th=[28181], 20.00th=[28443],
00:35:33.989 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.989 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.989 | 99.00th=[34866], 99.50th=[34866], 99.90th=[48497], 99.95th=[48497],
00:35:33.989 | 99.99th=[48497]
00:35:33.989 bw ( KiB/s): min= 2052, max= 2400, per=4.16%, avg=2230.11, stdev=82.35, samples=19
00:35:33.989 iops : min= 513, max= 600, avg=557.53, stdev=20.59, samples=19
00:35:33.989 lat (msec) : 20=0.82%, 50=99.18%
00:35:33.989 cpu : usr=98.46%, sys=1.10%, ctx=34, majf=0, minf=9
00:35:33.989 IO depths : 1=5.0%, 2=10.1%, 4=20.8%, 8=55.9%, 16=8.2%, 32=0.0%, >=64=0.0%
00:35:33.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 complete : 0=0.0%, 4=93.1%, 8=1.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.989 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.989 filename1: (groupid=0, jobs=1): err= 0: pid=2251598: Fri Nov 29 13:19:32 2024
00:35:33.989 read: IOPS=556, BW=2225KiB/s (2279kB/s)(21.8MiB/10009msec)
00:35:33.989 slat (nsec): min=5981, max=83778, avg=24769.60, stdev=9687.34
00:35:33.989 clat (usec): min=15839, max=54307, avg=28528.54, stdev=1555.40
00:35:33.989 lat (usec): min=15861, max=54320, avg=28553.31, stdev=1554.65
00:35:33.989 clat percentiles (usec):
00:35:33.989 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443],
00:35:33.989 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.989 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.989 | 99.00th=[29230], 99.50th=[29492], 99.90th=[54264], 99.95th=[54264],
00:35:33.989 | 99.99th=[54264]
00:35:33.989 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2225.70, stdev=75.27, samples=20
00:35:33.989 iops : min= 512, max= 576, avg=556.40, stdev=18.80, samples=20
00:35:33.989 lat (msec) : 20=0.29%, 50=99.43%, 100=0.29%
00:35:33.989 cpu : usr=98.42%, sys=1.23%, ctx=13, majf=0, minf=9
00:35:33.989 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:35:33.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.989 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.989 filename1: (groupid=0, jobs=1): err= 0: pid=2251599: Fri Nov 29 13:19:32 2024
00:35:33.989 read: IOPS=556, BW=2225KiB/s (2279kB/s)(21.8MiB/10008msec)
00:35:33.989 slat (nsec): min=6416, max=84154, avg=24713.01, stdev=9808.87
00:35:33.989 clat (usec): min=15857, max=53357, avg=28520.42, stdev=1504.86
00:35:33.989 lat (usec): min=15877, max=53371, avg=28545.14, stdev=1504.19
00:35:33.989 clat percentiles (usec):
00:35:33.989 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443],
00:35:33.989 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.989 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.989 | 99.00th=[29230], 99.50th=[29492], 99.90th=[53216], 99.95th=[53216],
00:35:33.989 | 99.99th=[53216]
00:35:33.989 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2225.90, stdev=74.78, samples=20
00:35:33.989 iops : min= 513, max= 576, avg=556.45, stdev=18.68, samples=20
00:35:33.989 lat (msec) : 20=0.29%, 50=99.43%, 100=0.29%
00:35:33.989 cpu : usr=98.50%, sys=1.15%, ctx=14, majf=0, minf=9
00:35:33.989 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:35:33.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.989 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.989 filename1: (groupid=0, jobs=1): err= 0: pid=2251600: Fri Nov 29 13:19:32 2024
00:35:33.989 read: IOPS=563, BW=2254KiB/s (2308kB/s)(22.0MiB/10007msec)
00:35:33.989 slat (nsec): min=6349, max=86033, avg=21453.58, stdev=10940.16
00:35:33.989 clat (usec): min=6491, max=48817, avg=28207.79, stdev=2753.37
00:35:33.989 lat (usec): min=6497, max=48833, avg=28229.24, stdev=2753.77
00:35:33.989 clat percentiles (usec):
00:35:33.989 | 1.00th=[17433], 5.00th=[23200], 10.00th=[27919], 20.00th=[28181],
00:35:33.989 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.989 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[29230],
00:35:33.989 | 99.00th=[34866], 99.50th=[36963], 99.90th=[49021], 99.95th=[49021],
00:35:33.989 | 99.99th=[49021]
00:35:33.989 bw ( KiB/s): min= 2048, max= 2464, per=4.19%, avg=2248.80, stdev=88.78, samples=20
00:35:33.989 iops : min= 512, max= 616, avg=562.20, stdev=22.19, samples=20
00:35:33.989 lat (msec) : 10=0.28%, 20=1.45%, 50=98.26%
00:35:33.989 cpu : usr=98.63%, sys=1.01%, ctx=6, majf=0, minf=9
00:35:33.989 IO depths : 1=4.7%, 2=9.4%, 4=19.6%, 8=57.6%, 16=8.7%, 32=0.0%, >=64=0.0%
00:35:33.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 complete : 0=0.0%, 4=92.8%, 8=2.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 issued rwts: total=5638,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.989 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.989 filename2: (groupid=0, jobs=1): err= 0: pid=2251601: Fri Nov 29 13:19:32 2024
00:35:33.989 read: IOPS=558, BW=2235KiB/s (2288kB/s)(21.9MiB/10023msec)
00:35:33.989 slat (nsec): min=9143, max=90542, avg=26991.45, stdev=9928.55
00:35:33.989 clat (usec): min=11430, max=29902, avg=28400.38, stdev=1081.81
00:35:33.989 lat (usec): min=11450, max=29917, avg=28427.37, stdev=1081.59
00:35:33.989 clat percentiles (usec):
00:35:33.989 | 1.00th=[27132], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181],
00:35:33.989 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.989 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.989 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754],
00:35:33.989 | 99.99th=[30016]
00:35:33.989 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2233.60, stdev=65.33, samples=20
00:35:33.989 iops : min= 544, max= 576, avg=558.40, stdev=16.33, samples=20
00:35:33.989 lat (msec) : 20=0.57%, 50=99.43%
00:35:33.989 cpu : usr=98.43%, sys=1.21%, ctx=14, majf=0, minf=9
00:35:33.989 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:35:33.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.989 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.989 filename2: (groupid=0, jobs=1): err= 0: pid=2251602: Fri Nov 29 13:19:32 2024
00:35:33.989 read: IOPS=560, BW=2242KiB/s (2296kB/s)(21.9MiB/10005msec)
00:35:33.989 slat (nsec): min=3845, max=83677, avg=23727.62, stdev=10824.44
00:35:33.989 clat (usec): min=16352, max=45381, avg=28339.89, stdev=1969.92
00:35:33.989 lat (usec): min=16360, max=45418, avg=28363.62, stdev=1970.88
00:35:33.989 clat percentiles (usec):
00:35:33.989 | 1.00th=[21103], 5.00th=[26346], 10.00th=[28181], 20.00th=[28181],
00:35:33.989 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.989 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.989 | 99.00th=[34341], 99.50th=[35914], 99.90th=[44827], 99.95th=[45351],
00:35:33.989 | 99.99th=[45351]
00:35:33.989 bw ( KiB/s): min= 2048, max= 2464, per=4.18%, avg=2240.00, stdev=95.26, samples=19
00:35:33.989 iops : min= 512, max= 616, avg=560.00, stdev=23.81, samples=19
00:35:33.989 lat (msec) : 20=0.82%, 50=99.18%
00:35:33.989 cpu : usr=98.45%, sys=1.19%, ctx=15, majf=0, minf=9
00:35:33.989 IO depths : 1=5.3%, 2=10.7%, 4=22.2%, 8=54.2%, 16=7.6%, 32=0.0%, >=64=0.0%
00:35:33.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.989 issued rwts: total=5608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.990 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.990 filename2: (groupid=0, jobs=1): err= 0: pid=2251603: Fri Nov 29 13:19:32 2024
00:35:33.990 read: IOPS=556, BW=2225KiB/s (2279kB/s)(21.8MiB/10009msec)
00:35:33.990 slat (nsec): min=5443, max=85041, avg=24258.57, stdev=9930.25
00:35:33.990 clat (usec): min=15951, max=54207, avg=28550.16, stdev=1615.01
00:35:33.990 lat (usec): min=15981, max=54222, avg=28574.42, stdev=1614.07
00:35:33.990 clat percentiles (usec):
00:35:33.990 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443],
00:35:33.990 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.990 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.990 | 99.00th=[29230], 99.50th=[33817], 99.90th=[54264], 99.95th=[54264],
00:35:33.990 | 99.99th=[54264]
00:35:33.990 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2225.70, stdev=75.27, samples=20
00:35:33.990 iops : min= 512, max= 576, avg=556.40, stdev=18.80, samples=20
00:35:33.990 lat (msec) : 20=0.29%, 50=99.43%, 100=0.29%
00:35:33.990 cpu : usr=98.39%, sys=1.26%, ctx=16, majf=0, minf=9
00:35:33.990 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0%
00:35:33.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.990 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.990 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.990 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.990 filename2: (groupid=0, jobs=1): err= 0: pid=2251604: Fri Nov 29 13:19:32 2024
00:35:33.990 read: IOPS=558, BW=2235KiB/s (2288kB/s)(21.9MiB/10023msec)
00:35:33.990 slat (nsec): min=11406, max=83448, avg=26919.05, stdev=9955.95
00:35:33.990 clat (usec): min=12285, max=29873, avg=28392.38, stdev=1078.00
00:35:33.990 lat (usec): min=12301, max=29895, avg=28419.30, stdev=1077.90
00:35:33.990 clat percentiles (usec):
00:35:33.990 | 1.00th=[27132], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181],
00:35:33.990 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443],
00:35:33.990 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967],
00:35:33.990 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754],
00:35:33.990 | 99.99th=[29754]
00:35:33.990 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2233.60, stdev=65.33, samples=20
00:35:33.990 iops : min= 544, max= 576, avg=558.40, stdev=16.33, samples=20
00:35:33.990 lat (msec) : 20=0.57%, 50=99.43%
00:35:33.990 cpu : usr=98.58%, sys=1.06%, ctx=10, majf=0, minf=9
00:35:33.990 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:35:33.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.990 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:33.990 issued rwts: total=5600,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:33.990 latency : target=0, window=0, percentile=100.00%, depth=16
00:35:33.990 filename2: (groupid=0, jobs=1): err= 0: pid=2251606: Fri Nov 29 13:19:32 2024
00:35:33.990 read: IOPS=560, BW=2241KiB/s (2295kB/s)(21.9MiB/10022msec)
00:35:33.990 slat (nsec): min=3345, max=90386, avg=22236.13, stdev=10359.12
00:35:33.990 clat (usec): min=3159, max=30396, avg=28377.49, stdev=1722.40 00:35:33.990 lat (usec): min=3167, max=30408, avg=28399.73, stdev=1722.44 00:35:33.990 clat percentiles (usec): 00:35:33.990 | 1.00th=[18482], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:35:33.990 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:35:33.990 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:35:33.990 | 99.00th=[29492], 99.50th=[29492], 99.90th=[30278], 99.95th=[30278], 00:35:33.990 | 99.99th=[30278] 00:35:33.990 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2240.00, stdev=77.69, samples=20 00:35:33.990 iops : min= 544, max= 608, avg=560.00, stdev=19.42, samples=20 00:35:33.990 lat (msec) : 4=0.16%, 10=0.12%, 20=0.85%, 50=98.86% 00:35:33.990 cpu : usr=98.21%, sys=1.43%, ctx=15, majf=0, minf=9 00:35:33.990 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:33.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.990 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.990 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.990 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.990 filename2: (groupid=0, jobs=1): err= 0: pid=2251607: Fri Nov 29 13:19:32 2024 00:35:33.990 read: IOPS=566, BW=2265KiB/s (2319kB/s)(22.2MiB/10025msec) 00:35:33.990 slat (nsec): min=4268, max=73398, avg=14260.63, stdev=8097.02 00:35:33.990 clat (usec): min=2408, max=35825, avg=28139.80, stdev=2960.97 00:35:33.990 lat (usec): min=2416, max=35832, avg=28154.06, stdev=2961.56 00:35:33.990 clat percentiles (usec): 00:35:33.990 | 1.00th=[10945], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:35:33.990 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28705], 60.00th=[28705], 00:35:33.990 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:35:33.990 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29754], 
99.95th=[35914], 00:35:33.990 | 99.99th=[35914] 00:35:33.990 bw ( KiB/s): min= 2176, max= 2912, per=4.22%, avg=2264.00, stdev=165.10, samples=20 00:35:33.990 iops : min= 544, max= 728, avg=566.00, stdev=41.27, samples=20 00:35:33.990 lat (msec) : 4=0.81%, 10=0.14%, 20=1.48%, 50=97.57% 00:35:33.990 cpu : usr=98.29%, sys=1.32%, ctx=31, majf=0, minf=9 00:35:33.990 IO depths : 1=6.1%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:33.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.990 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.990 issued rwts: total=5676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.990 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.990 filename2: (groupid=0, jobs=1): err= 0: pid=2251608: Fri Nov 29 13:19:32 2024 00:35:33.990 read: IOPS=556, BW=2226KiB/s (2279kB/s)(21.8MiB/10006msec) 00:35:33.990 slat (nsec): min=6181, max=99194, avg=43430.65, stdev=19293.97 00:35:33.990 clat (usec): min=9311, max=77668, avg=28354.44, stdev=2583.81 00:35:33.990 lat (usec): min=9325, max=77684, avg=28397.87, stdev=2582.51 00:35:33.990 clat percentiles (usec): 00:35:33.990 | 1.00th=[23725], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:35:33.990 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:35:33.990 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:35:33.990 | 99.00th=[29492], 99.50th=[39584], 99.90th=[64750], 99.95th=[65274], 00:35:33.990 | 99.99th=[78119] 00:35:33.990 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2221.00, stdev=74.67, samples=20 00:35:33.990 iops : min= 513, max= 576, avg=555.25, stdev=18.67, samples=20 00:35:33.990 lat (msec) : 10=0.29%, 20=0.47%, 50=98.96%, 100=0.29% 00:35:33.990 cpu : usr=98.59%, sys=1.03%, ctx=10, majf=0, minf=9 00:35:33.990 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:35:33.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:33.990 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.990 issued rwts: total=5568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.990 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.990 filename2: (groupid=0, jobs=1): err= 0: pid=2251609: Fri Nov 29 13:19:32 2024 00:35:33.990 read: IOPS=560, BW=2242KiB/s (2295kB/s)(21.9MiB/10009msec) 00:35:33.990 slat (nsec): min=5236, max=80486, avg=20143.40, stdev=11499.35 00:35:33.990 clat (usec): min=9488, max=51390, avg=28385.99, stdev=2911.15 00:35:33.990 lat (usec): min=9497, max=51406, avg=28406.13, stdev=2911.49 00:35:33.990 clat percentiles (usec): 00:35:33.990 | 1.00th=[16712], 5.00th=[27395], 10.00th=[28181], 20.00th=[28443], 00:35:33.990 | 30.00th=[28443], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:35:33.990 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:35:33.990 | 99.00th=[40633], 99.50th=[41157], 99.90th=[51119], 99.95th=[51119], 00:35:33.990 | 99.99th=[51643] 00:35:33.990 bw ( KiB/s): min= 2048, max= 2456, per=4.17%, avg=2237.20, stdev=90.41, samples=20 00:35:33.990 iops : min= 512, max= 614, avg=559.30, stdev=22.60, samples=20 00:35:33.990 lat (msec) : 10=0.25%, 20=2.26%, 50=97.20%, 100=0.29% 00:35:33.990 cpu : usr=98.29%, sys=1.24%, ctx=98, majf=0, minf=9 00:35:33.990 IO depths : 1=4.7%, 2=10.6%, 4=23.6%, 8=53.3%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:33.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.990 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:33.990 issued rwts: total=5609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:33.990 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:33.990 00:35:33.990 Run status group 0 (all jobs): 00:35:33.990 READ: bw=52.3MiB/s (54.9MB/s), 2225KiB/s-2265KiB/s (2279kB/s-2319kB/s), io=525MiB (550MB), run=10004-10025msec 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 
-- # destroy_subsystems 0 1 2 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.990 13:19:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.990 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:33.991 13:19:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 bdev_null0 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 [2024-11-29 13:19:32.403306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 bdev_null1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 
00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:33.991 { 00:35:33.991 "params": { 00:35:33.991 "name": "Nvme$subsystem", 00:35:33.991 "trtype": "$TEST_TRANSPORT", 00:35:33.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.991 "adrfam": "ipv4", 00:35:33.991 "trsvcid": "$NVMF_PORT", 00:35:33.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.991 "hdgst": ${hdgst:-false}, 00:35:33.991 "ddgst": ${ddgst:-false} 00:35:33.991 }, 00:35:33.991 "method": "bdev_nvme_attach_controller" 00:35:33.991 } 00:35:33.991 EOF 00:35:33.991 )") 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:33.991 13:19:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:33.991 { 00:35:33.991 "params": { 00:35:33.991 "name": "Nvme$subsystem", 00:35:33.991 "trtype": "$TEST_TRANSPORT", 00:35:33.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.991 "adrfam": "ipv4", 00:35:33.991 "trsvcid": "$NVMF_PORT", 00:35:33.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.991 "hdgst": ${hdgst:-false}, 00:35:33.991 "ddgst": ${ddgst:-false} 00:35:33.991 }, 00:35:33.991 "method": "bdev_nvme_attach_controller" 00:35:33.991 } 00:35:33.991 EOF 00:35:33.991 )") 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:33.991 13:19:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:33.991 "params": { 00:35:33.991 "name": "Nvme0", 00:35:33.991 "trtype": "tcp", 00:35:33.991 "traddr": "10.0.0.2", 00:35:33.991 "adrfam": "ipv4", 00:35:33.991 "trsvcid": "4420", 00:35:33.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:33.991 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:33.991 "hdgst": false, 00:35:33.991 "ddgst": false 00:35:33.991 }, 00:35:33.991 "method": "bdev_nvme_attach_controller" 00:35:33.991 },{ 00:35:33.991 "params": { 00:35:33.991 "name": "Nvme1", 00:35:33.991 "trtype": "tcp", 00:35:33.991 "traddr": "10.0.0.2", 00:35:33.991 "adrfam": "ipv4", 00:35:33.991 "trsvcid": "4420", 00:35:33.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:33.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:33.992 "hdgst": false, 00:35:33.992 "ddgst": false 00:35:33.992 }, 00:35:33.992 "method": "bdev_nvme_attach_controller" 00:35:33.992 }' 00:35:33.992 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:33.992 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:33.992 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:33.992 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:33.992 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:33.992 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:33.992 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:33.992 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:33.992 13:19:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:33.992 13:19:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.992 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:33.992 ... 00:35:33.992 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:33.992 ... 00:35:33.992 fio-3.35 00:35:33.992 Starting 4 threads 00:35:39.410 00:35:39.410 filename0: (groupid=0, jobs=1): err= 0: pid=2253546: Fri Nov 29 13:19:38 2024 00:35:39.410 read: IOPS=2494, BW=19.5MiB/s (20.4MB/s)(97.5MiB/5001msec) 00:35:39.410 slat (nsec): min=6255, max=57443, avg=9124.73, stdev=3162.18 00:35:39.410 clat (usec): min=1120, max=5996, avg=3180.25, stdev=544.07 00:35:39.410 lat (usec): min=1131, max=6003, avg=3189.37, stdev=543.71 00:35:39.410 clat percentiles (usec): 00:35:39.410 | 1.00th=[ 2212], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2835], 00:35:39.410 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3130], 00:35:39.410 | 70.00th=[ 3195], 80.00th=[ 3392], 90.00th=[ 3916], 95.00th=[ 4424], 00:35:39.410 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 5538], 99.95th=[ 5669], 00:35:39.410 | 99.99th=[ 5997] 00:35:39.410 bw ( KiB/s): min=19104, max=20912, per=24.45%, avg=20006.22, stdev=536.71, samples=9 00:35:39.410 iops : min= 2388, max= 2614, avg=2500.78, stdev=67.09, samples=9 00:35:39.410 lat (msec) : 2=0.29%, 4=90.61%, 10=9.10% 00:35:39.410 cpu : usr=95.52%, sys=4.18%, ctx=9, majf=0, minf=9 00:35:39.410 IO depths : 1=0.2%, 2=2.9%, 4=69.1%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.410 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:39.410 issued rwts: total=12475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:39.410 filename0: (groupid=0, jobs=1): err= 0: pid=2253547: Fri Nov 29 13:19:38 2024 00:35:39.410 read: IOPS=2479, BW=19.4MiB/s (20.3MB/s)(96.9MiB/5002msec) 00:35:39.410 slat (nsec): min=6349, max=46298, avg=9148.61, stdev=3250.02 00:35:39.410 clat (usec): min=728, max=6039, avg=3201.04, stdev=554.50 00:35:39.410 lat (usec): min=735, max=6046, avg=3210.18, stdev=554.13 00:35:39.410 clat percentiles (usec): 00:35:39.410 | 1.00th=[ 2245], 5.00th=[ 2573], 10.00th=[ 2704], 20.00th=[ 2835], 00:35:39.410 | 30.00th=[ 2900], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3163], 00:35:39.410 | 70.00th=[ 3228], 80.00th=[ 3425], 90.00th=[ 3916], 95.00th=[ 4490], 00:35:39.410 | 99.00th=[ 5080], 99.50th=[ 5276], 99.90th=[ 5669], 99.95th=[ 5800], 00:35:39.410 | 99.99th=[ 6063] 00:35:39.410 bw ( KiB/s): min=18368, max=20560, per=24.11%, avg=19726.22, stdev=734.80, samples=9 00:35:39.410 iops : min= 2296, max= 2570, avg=2465.78, stdev=91.85, samples=9 00:35:39.410 lat (usec) : 750=0.02% 00:35:39.410 lat (msec) : 2=0.40%, 4=90.30%, 10=9.28% 00:35:39.410 cpu : usr=96.26%, sys=3.40%, ctx=38, majf=0, minf=9 00:35:39.410 IO depths : 1=0.1%, 2=2.7%, 4=68.3%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.410 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.410 issued rwts: total=12400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:39.410 filename1: (groupid=0, jobs=1): err= 0: pid=2253548: Fri Nov 29 13:19:38 2024 00:35:39.410 read: IOPS=2516, BW=19.7MiB/s (20.6MB/s)(98.3MiB/5001msec) 00:35:39.410 slat (nsec): min=6289, max=62389, avg=9301.06, stdev=3316.02 00:35:39.410 clat (usec): min=865, max=5826, avg=3151.57, stdev=550.81 00:35:39.410 lat 
(usec): min=872, max=5833, avg=3160.87, stdev=550.66 00:35:39.410 clat percentiles (usec): 00:35:39.410 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2802], 00:35:39.410 | 30.00th=[ 2900], 40.00th=[ 2999], 50.00th=[ 3097], 60.00th=[ 3130], 00:35:39.410 | 70.00th=[ 3195], 80.00th=[ 3392], 90.00th=[ 3884], 95.00th=[ 4359], 00:35:39.410 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5604], 99.95th=[ 5669], 00:35:39.410 | 99.99th=[ 5800] 00:35:39.410 bw ( KiB/s): min=18704, max=21040, per=24.56%, avg=20099.56, stdev=801.19, samples=9 00:35:39.410 iops : min= 2338, max= 2630, avg=2512.44, stdev=100.15, samples=9 00:35:39.410 lat (usec) : 1000=0.02% 00:35:39.410 lat (msec) : 2=0.41%, 4=90.89%, 10=8.68% 00:35:39.410 cpu : usr=96.10%, sys=3.56%, ctx=12, majf=0, minf=9 00:35:39.410 IO depths : 1=0.1%, 2=3.6%, 4=68.6%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.410 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.410 issued rwts: total=12585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:39.410 filename1: (groupid=0, jobs=1): err= 0: pid=2253549: Fri Nov 29 13:19:38 2024 00:35:39.410 read: IOPS=2740, BW=21.4MiB/s (22.4MB/s)(107MiB/5003msec) 00:35:39.410 slat (nsec): min=6299, max=44959, avg=9181.05, stdev=3101.57 00:35:39.410 clat (usec): min=841, max=5453, avg=2891.11, stdev=533.91 00:35:39.410 lat (usec): min=862, max=5467, avg=2900.29, stdev=533.79 00:35:39.410 clat percentiles (usec): 00:35:39.410 | 1.00th=[ 1598], 5.00th=[ 2147], 10.00th=[ 2343], 20.00th=[ 2540], 00:35:39.410 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2999], 00:35:39.410 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3982], 00:35:39.410 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5211], 99.95th=[ 5342], 00:35:39.410 | 99.99th=[ 5407] 00:35:39.410 bw ( KiB/s): 
min=20864, max=23424, per=26.85%, avg=21966.22, stdev=877.07, samples=9 00:35:39.410 iops : min= 2608, max= 2928, avg=2745.78, stdev=109.63, samples=9 00:35:39.410 lat (usec) : 1000=0.15% 00:35:39.410 lat (msec) : 2=2.39%, 4=92.54%, 10=4.92% 00:35:39.410 cpu : usr=95.98%, sys=3.68%, ctx=9, majf=0, minf=9 00:35:39.410 IO depths : 1=0.2%, 2=7.4%, 4=64.8%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.410 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.410 issued rwts: total=13709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:39.410 00:35:39.410 Run status group 0 (all jobs): 00:35:39.410 READ: bw=79.9MiB/s (83.8MB/s), 19.4MiB/s-21.4MiB/s (20.3MB/s-22.4MB/s), io=400MiB (419MB), run=5001-5003msec 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.410 00:35:39.410 real 0m24.358s 00:35:39.410 user 4m51.804s 00:35:39.410 sys 0m5.390s 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:39.410 13:19:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.410 ************************************ 00:35:39.410 END TEST fio_dif_rand_params 00:35:39.410 ************************************ 00:35:39.410 13:19:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:39.410 13:19:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:39.410 13:19:38 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:35:39.410 13:19:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:39.410 ************************************ 00:35:39.410 START TEST fio_dif_digest 00:35:39.410 ************************************ 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:39.410 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:35:39.411 bdev_null0 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:39.411 [2024-11-29 13:19:38.850902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:39.411 { 00:35:39.411 "params": { 00:35:39.411 "name": "Nvme$subsystem", 00:35:39.411 "trtype": "$TEST_TRANSPORT", 00:35:39.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.411 "adrfam": "ipv4", 00:35:39.411 "trsvcid": "$NVMF_PORT", 00:35:39.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.411 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.411 "hdgst": ${hdgst:-false}, 00:35:39.411 "ddgst": ${ddgst:-false} 00:35:39.411 }, 00:35:39.411 "method": "bdev_nvme_attach_controller" 00:35:39.411 } 00:35:39.411 EOF 00:35:39.411 )") 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:39.411 "params": { 00:35:39.411 "name": "Nvme0", 00:35:39.411 "trtype": "tcp", 00:35:39.411 "traddr": "10.0.0.2", 00:35:39.411 "adrfam": "ipv4", 00:35:39.411 "trsvcid": "4420", 00:35:39.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.411 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:39.411 "hdgst": true, 00:35:39.411 "ddgst": true 00:35:39.411 }, 00:35:39.411 "method": "bdev_nvme_attach_controller" 00:35:39.411 }' 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:39.411 13:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.411 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:39.411 ... 00:35:39.411 fio-3.35 00:35:39.411 Starting 3 threads 00:35:51.668 00:35:51.668 filename0: (groupid=0, jobs=1): err= 0: pid=2254757: Fri Nov 29 13:19:49 2024 00:35:51.668 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(357MiB/10045msec) 00:35:51.668 slat (nsec): min=6585, max=77373, avg=20017.69, stdev=7549.84 00:35:51.668 clat (usec): min=8076, max=52040, avg=10529.39, stdev=1298.62 00:35:51.668 lat (usec): min=8105, max=52053, avg=10549.41, stdev=1298.39 00:35:51.668 clat percentiles (usec): 00:35:51.668 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:35:51.668 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:35:51.668 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:35:51.668 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13435], 99.95th=[49021], 00:35:51.668 | 99.99th=[52167] 00:35:51.668 bw ( KiB/s): min=35072, max=37632, per=35.05%, avg=36480.00, stdev=758.96, samples=20 00:35:51.668 iops : min= 274, max= 294, avg=285.00, stdev= 5.93, samples=20 00:35:51.668 lat 
(msec) : 10=24.23%, 20=75.70%, 50=0.04%, 100=0.04% 00:35:51.668 cpu : usr=95.68%, sys=3.99%, ctx=32, majf=0, minf=58 00:35:51.668 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.668 issued rwts: total=2852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:51.668 filename0: (groupid=0, jobs=1): err= 0: pid=2254758: Fri Nov 29 13:19:49 2024 00:35:51.668 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(329MiB/10046msec) 00:35:51.668 slat (nsec): min=6537, max=41930, avg=16831.97, stdev=7159.04 00:35:51.668 clat (usec): min=8286, max=48149, avg=11432.25, stdev=1248.27 00:35:51.668 lat (usec): min=8299, max=48163, avg=11449.08, stdev=1248.31 00:35:51.668 clat percentiles (usec): 00:35:51.668 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:35:51.668 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:35:51.668 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:35:51.668 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14091], 99.95th=[45876], 00:35:51.668 | 99.99th=[47973] 00:35:51.668 bw ( KiB/s): min=32256, max=35072, per=32.30%, avg=33612.80, stdev=770.69, samples=20 00:35:51.668 iops : min= 252, max= 274, avg=262.60, stdev= 6.02, samples=20 00:35:51.668 lat (msec) : 10=3.35%, 20=96.58%, 50=0.08% 00:35:51.668 cpu : usr=96.51%, sys=3.19%, ctx=19, majf=0, minf=46 00:35:51.668 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.668 issued rwts: total=2628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.668 latency : target=0, window=0, percentile=100.00%, 
depth=3 00:35:51.668 filename0: (groupid=0, jobs=1): err= 0: pid=2254759: Fri Nov 29 13:19:49 2024 00:35:51.668 read: IOPS=267, BW=33.4MiB/s (35.1MB/s)(336MiB/10046msec) 00:35:51.668 slat (nsec): min=6983, max=90500, avg=16332.82, stdev=5536.63 00:35:51.668 clat (usec): min=8278, max=48919, avg=11177.10, stdev=1264.37 00:35:51.668 lat (usec): min=8293, max=48930, avg=11193.44, stdev=1264.33 00:35:51.668 clat percentiles (usec): 00:35:51.668 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:35:51.668 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:35:51.668 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:35:51.668 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14615], 99.95th=[45876], 00:35:51.668 | 99.99th=[49021] 00:35:51.668 bw ( KiB/s): min=33280, max=35328, per=33.03%, avg=34380.80, stdev=499.04, samples=20 00:35:51.668 iops : min= 260, max= 276, avg=268.60, stdev= 3.90, samples=20 00:35:51.668 lat (msec) : 10=6.21%, 20=93.71%, 50=0.07% 00:35:51.668 cpu : usr=96.26%, sys=3.43%, ctx=21, majf=0, minf=84 00:35:51.668 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.668 issued rwts: total=2688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:51.668 00:35:51.668 Run status group 0 (all jobs): 00:35:51.668 READ: bw=102MiB/s (107MB/s), 32.7MiB/s-35.5MiB/s (34.3MB/s-37.2MB/s), io=1021MiB (1071MB), run=10045-10046msec 00:35:51.668 13:19:49 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:51.668 13:19:49 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:51.668 13:19:49 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:51.668 13:19:49 nvmf_dif.fio_dif_digest -- target/dif.sh@46 
-- # destroy_subsystem 0 00:35:51.668 13:19:49 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:51.668 13:19:49 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:51.668 13:19:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.669 13:19:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:51.669 13:19:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.669 13:19:49 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:51.669 13:19:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.669 13:19:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:51.669 13:19:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.669 00:35:51.669 real 0m11.183s 00:35:51.669 user 0m35.695s 00:35:51.669 sys 0m1.372s 00:35:51.669 13:19:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.669 13:19:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:51.669 ************************************ 00:35:51.669 END TEST fio_dif_digest 00:35:51.669 ************************************ 00:35:51.669 13:19:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:51.669 13:19:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:51.669 rmmod nvme_tcp 00:35:51.669 rmmod nvme_fabrics 00:35:51.669 
rmmod nvme_keyring 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2246214 ']' 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2246214 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2246214 ']' 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2246214 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2246214 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2246214' 00:35:51.669 killing process with pid 2246214 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2246214 00:35:51.669 13:19:50 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2246214 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:51.669 13:19:50 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:53.044 Waiting for block devices as requested 00:35:53.044 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:53.044 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:53.302 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:53.303 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:53.303 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:53.303 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:53.562 0000:00:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:35:53.562 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:53.562 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:53.562 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:53.821 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:53.821 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:53.821 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:53.821 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:54.078 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:54.078 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:54.078 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:54.336 13:19:53 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:54.336 13:19:53 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:54.336 13:19:53 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:54.336 13:19:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:54.336 13:19:53 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:54.336 13:19:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:54.336 13:19:53 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:54.336 13:19:53 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:54.336 13:19:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.336 13:19:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:54.336 13:19:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.239 13:19:55 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:56.239 00:35:56.239 real 1m12.997s 00:35:56.239 user 7m8.652s 00:35:56.239 sys 0m20.020s 00:35:56.239 13:19:56 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.239 13:19:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:56.239 ************************************ 00:35:56.239 END TEST nvmf_dif 00:35:56.239 ************************************ 00:35:56.239 13:19:56 
-- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:56.239 13:19:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.239 13:19:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.239 13:19:56 -- common/autotest_common.sh@10 -- # set +x 00:35:56.498 ************************************ 00:35:56.498 START TEST nvmf_abort_qd_sizes 00:35:56.498 ************************************ 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:56.498 * Looking for test storage... 00:35:56.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.498 13:19:56 
nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:56.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.498 --rc genhtml_branch_coverage=1 
00:35:56.498 --rc genhtml_function_coverage=1 00:35:56.498 --rc genhtml_legend=1 00:35:56.498 --rc geninfo_all_blocks=1 00:35:56.498 --rc geninfo_unexecuted_blocks=1 00:35:56.498 00:35:56.498 ' 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:56.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.498 --rc genhtml_branch_coverage=1 00:35:56.498 --rc genhtml_function_coverage=1 00:35:56.498 --rc genhtml_legend=1 00:35:56.498 --rc geninfo_all_blocks=1 00:35:56.498 --rc geninfo_unexecuted_blocks=1 00:35:56.498 00:35:56.498 ' 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:56.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.498 --rc genhtml_branch_coverage=1 00:35:56.498 --rc genhtml_function_coverage=1 00:35:56.498 --rc genhtml_legend=1 00:35:56.498 --rc geninfo_all_blocks=1 00:35:56.498 --rc geninfo_unexecuted_blocks=1 00:35:56.498 00:35:56.498 ' 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:56.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.498 --rc genhtml_branch_coverage=1 00:35:56.498 --rc genhtml_function_coverage=1 00:35:56.498 --rc genhtml_legend=1 00:35:56.498 --rc geninfo_all_blocks=1 00:35:56.498 --rc geninfo_unexecuted_blocks=1 00:35:56.498 00:35:56.498 ' 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.498 
13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.498 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:56.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:35:56.499 13:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:01.769 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:01.769 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:01.769 Found net devices under 0000:86:00.0: cvl_0_0 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:01.769 Found net devices under 0000:86:00.1: cvl_0_1 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.769 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.770 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:02.028 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:02.028 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:02.028 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:02.028 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:02.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:02.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:36:02.028 00:36:02.028 --- 10.0.0.2 ping statistics --- 00:36:02.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.028 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:36:02.028 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:02.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:02.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:36:02.028 00:36:02.028 --- 10.0.0.1 ping statistics --- 00:36:02.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.028 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:36:02.028 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.028 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:02.028 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:02.028 13:20:01 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:04.563 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:04.563 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:36:04.563 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:05.497 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2262550 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2262550 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2262550 ']' 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:05.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.498 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:05.498 [2024-11-29 13:20:05.303459] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:36:05.498 [2024-11-29 13:20:05.303502] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:05.755 [2024-11-29 13:20:05.370018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:05.755 [2024-11-29 13:20:05.414379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:05.755 [2024-11-29 13:20:05.414417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:05.755 [2024-11-29 13:20:05.414424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:05.755 [2024-11-29 13:20:05.414430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:05.755 [2024-11-29 13:20:05.414435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:05.755 [2024-11-29 13:20:05.416039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.755 [2024-11-29 13:20:05.416057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:05.755 [2024-11-29 13:20:05.416078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:05.755 [2024-11-29 13:20:05.416079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:05.755 13:20:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:06.013 ************************************ 00:36:06.013 START TEST spdk_target_abort 00:36:06.013 ************************************ 00:36:06.013 13:20:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:36:06.013 13:20:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:06.013 13:20:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:36:06.013 13:20:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.013 13:20:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.298 spdk_targetn1 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.298 [2024-11-29 13:20:08.433474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.298 [2024-11-29 13:20:08.486016] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:09.298 13:20:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:12.580 Initializing NVMe Controllers 00:36:12.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:12.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:12.580 Initialization complete. Launching workers. 
00:36:12.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16339, failed: 0 00:36:12.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1527, failed to submit 14812 00:36:12.580 success 764, unsuccessful 763, failed 0 00:36:12.580 13:20:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:12.580 13:20:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:15.889 Initializing NVMe Controllers 00:36:15.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:15.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:15.889 Initialization complete. Launching workers. 00:36:15.889 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8519, failed: 0 00:36:15.889 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1223, failed to submit 7296 00:36:15.889 success 360, unsuccessful 863, failed 0 00:36:15.889 13:20:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:15.889 13:20:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:19.171 Initializing NVMe Controllers 00:36:19.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:19.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:19.171 Initialization complete. Launching workers. 
00:36:19.171 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37525, failed: 0 00:36:19.171 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2836, failed to submit 34689 00:36:19.171 success 593, unsuccessful 2243, failed 0 00:36:19.171 13:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:19.171 13:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.171 13:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.171 13:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.171 13:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:19.171 13:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.171 13:20:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2262550 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2262550 ']' 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2262550 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2262550 00:36:20.106 13:20:19 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2262550' 00:36:20.106 killing process with pid 2262550 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2262550 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2262550 00:36:20.106 00:36:20.106 real 0m14.300s 00:36:20.106 user 0m54.448s 00:36:20.106 sys 0m2.655s 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.106 13:20:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.106 ************************************ 00:36:20.106 END TEST spdk_target_abort 00:36:20.106 ************************************ 00:36:20.365 13:20:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:20.365 13:20:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:20.365 13:20:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:20.365 13:20:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:20.365 ************************************ 00:36:20.365 START TEST kernel_target_abort 00:36:20.365 ************************************ 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:20.365 13:20:19 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:20.365 13:20:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:20.365 13:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:20.365 13:20:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:22.897 Waiting for block devices as requested 00:36:22.897 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:22.897 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:22.897 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:22.897 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:22.897 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:22.897 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:22.897 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:22.897 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:23.156 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:23.156 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:23.156 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:23.156 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:23.415 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:23.415 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:23.415 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:23.673 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:23.673 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:23.674 13:20:23 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:23.674 No valid GPT data, bailing 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:23.674 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:23.932 00:36:23.932 Discovery Log Number of Records 2, Generation counter 2 00:36:23.932 =====Discovery Log Entry 0====== 00:36:23.932 trtype: tcp 00:36:23.932 adrfam: ipv4 00:36:23.932 subtype: current discovery subsystem 00:36:23.932 treq: not specified, sq flow control disable supported 00:36:23.932 portid: 1 00:36:23.932 trsvcid: 4420 00:36:23.932 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:23.932 traddr: 10.0.0.1 00:36:23.932 eflags: none 00:36:23.932 sectype: none 00:36:23.932 =====Discovery Log Entry 1====== 00:36:23.932 trtype: tcp 00:36:23.932 adrfam: ipv4 00:36:23.932 subtype: nvme subsystem 00:36:23.932 treq: not specified, sq flow control disable supported 00:36:23.932 portid: 1 00:36:23.932 trsvcid: 4420 00:36:23.932 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:23.932 traddr: 10.0.0.1 00:36:23.932 eflags: none 00:36:23.932 sectype: none 00:36:23.932 13:20:23 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:23.932 13:20:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.218 Initializing NVMe Controllers 00:36:27.218 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:27.218 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:27.218 Initialization complete. Launching workers. 
00:36:27.218 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89769, failed: 0 00:36:27.218 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 89769, failed to submit 0 00:36:27.218 success 0, unsuccessful 89769, failed 0 00:36:27.218 13:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:27.218 13:20:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:30.502 Initializing NVMe Controllers 00:36:30.502 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:30.502 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:30.502 Initialization complete. Launching workers. 00:36:30.502 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142764, failed: 0 00:36:30.502 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35862, failed to submit 106902 00:36:30.502 success 0, unsuccessful 35862, failed 0 00:36:30.502 13:20:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:30.502 13:20:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:33.788 Initializing NVMe Controllers 00:36:33.789 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:33.789 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:33.789 Initialization complete. Launching workers. 
00:36:33.789 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 134129, failed: 0 00:36:33.789 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33562, failed to submit 100567 00:36:33.789 success 0, unsuccessful 33562, failed 0 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:33.789 13:20:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:35.692 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:35.692 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:35.692 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:35.692 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:35.692 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:35.692 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:35.692 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:35.950 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:35.950 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:35.950 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:35.950 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:35.950 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:35.950 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:35.950 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:35.950 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:35.950 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:36.887 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:36.887 00:36:36.887 real 0m16.555s 00:36:36.887 user 0m8.564s 00:36:36.887 sys 0m4.487s 00:36:36.887 13:20:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:36.887 13:20:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:36.887 ************************************ 00:36:36.887 END TEST kernel_target_abort 00:36:36.887 ************************************ 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:36.887 rmmod nvme_tcp 00:36:36.887 rmmod nvme_fabrics 00:36:36.887 rmmod nvme_keyring 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2262550 ']' 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2262550 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2262550 ']' 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2262550 00:36:36.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2262550) - No such process 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2262550 is not found' 00:36:36.887 Process with pid 2262550 is not found 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:36.887 13:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:39.419 Waiting for block devices as requested 00:36:39.420 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:39.420 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:39.420 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:39.420 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:39.679 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:39.679 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:39.679 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:39.679 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:39.937 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:39.937 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:39.937 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:39.937 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:40.199 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:40.199 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:40.199 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:40.459 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:40.459 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:40.459 13:20:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.991 13:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:42.991 00:36:42.991 real 0m46.188s 00:36:42.991 user 1m6.899s 00:36:42.991 sys 0m15.045s 00:36:42.991 13:20:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:42.991 13:20:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.991 ************************************ 00:36:42.991 END TEST nvmf_abort_qd_sizes 00:36:42.991 ************************************ 00:36:42.991 13:20:42 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:42.991 13:20:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:42.991 13:20:42 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:42.991 13:20:42 -- common/autotest_common.sh@10 -- # set +x 00:36:42.991 ************************************ 00:36:42.991 START TEST keyring_file 00:36:42.991 ************************************ 00:36:42.991 13:20:42 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:42.991 * Looking for test storage... 00:36:42.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:42.992 13:20:42 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:42.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.992 --rc genhtml_branch_coverage=1 00:36:42.992 --rc genhtml_function_coverage=1 00:36:42.992 --rc genhtml_legend=1 00:36:42.992 --rc geninfo_all_blocks=1 00:36:42.992 --rc geninfo_unexecuted_blocks=1 00:36:42.992 00:36:42.992 ' 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:42.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.992 --rc genhtml_branch_coverage=1 00:36:42.992 --rc genhtml_function_coverage=1 00:36:42.992 --rc genhtml_legend=1 00:36:42.992 --rc geninfo_all_blocks=1 00:36:42.992 --rc 
geninfo_unexecuted_blocks=1 00:36:42.992 00:36:42.992 ' 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:42.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.992 --rc genhtml_branch_coverage=1 00:36:42.992 --rc genhtml_function_coverage=1 00:36:42.992 --rc genhtml_legend=1 00:36:42.992 --rc geninfo_all_blocks=1 00:36:42.992 --rc geninfo_unexecuted_blocks=1 00:36:42.992 00:36:42.992 ' 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:42.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.992 --rc genhtml_branch_coverage=1 00:36:42.992 --rc genhtml_function_coverage=1 00:36:42.992 --rc genhtml_legend=1 00:36:42.992 --rc geninfo_all_blocks=1 00:36:42.992 --rc geninfo_unexecuted_blocks=1 00:36:42.992 00:36:42.992 ' 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:42.992 13:20:42 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.992 13:20:42 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.992 13:20:42 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.992 13:20:42 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.992 13:20:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.992 13:20:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:42.992 13:20:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:42.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cnGMUKPRdm 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cnGMUKPRdm 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cnGMUKPRdm 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.cnGMUKPRdm 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HesbKVlmgd 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:42.992 13:20:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HesbKVlmgd 00:36:42.992 13:20:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HesbKVlmgd 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.HesbKVlmgd 
00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=2271149 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:42.992 13:20:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2271149 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2271149 ']' 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:42.992 13:20:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:42.992 [2024-11-29 13:20:42.683995] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:36:42.992 [2024-11-29 13:20:42.684047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2271149 ] 00:36:42.992 [2024-11-29 13:20:42.745596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:42.992 [2024-11-29 13:20:42.787919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.251 13:20:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:43.251 13:20:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:43.251 13:20:42 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:43.251 13:20:42 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.251 13:20:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:43.251 [2024-11-29 13:20:43.003637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.251 null0 00:36:43.251 [2024-11-29 13:20:43.035691] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:43.251 [2024-11-29 13:20:43.036063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.251 13:20:43 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.251 13:20:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:43.251 [2024-11-29 13:20:43.063759] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:43.251 request: 00:36:43.251 { 00:36:43.251 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:43.251 "secure_channel": false, 00:36:43.251 "listen_address": { 00:36:43.251 "trtype": "tcp", 00:36:43.251 "traddr": "127.0.0.1", 00:36:43.251 "trsvcid": "4420" 00:36:43.251 }, 00:36:43.251 "method": "nvmf_subsystem_add_listener", 00:36:43.251 "req_id": 1 00:36:43.251 } 00:36:43.251 Got JSON-RPC error response 00:36:43.251 response: 00:36:43.251 { 00:36:43.509 "code": -32602, 00:36:43.509 "message": "Invalid parameters" 00:36:43.509 } 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:43.509 13:20:43 keyring_file -- keyring/file.sh@47 -- # bperfpid=2271188 00:36:43.509 13:20:43 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2271188 /var/tmp/bperf.sock 00:36:43.509 13:20:43 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:43.509 13:20:43 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2271188 ']' 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:43.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:43.509 [2024-11-29 13:20:43.116752] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 00:36:43.509 [2024-11-29 13:20:43.116803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2271188 ] 00:36:43.509 [2024-11-29 13:20:43.179825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.509 [2024-11-29 13:20:43.222859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:43.509 13:20:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:43.509 13:20:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cnGMUKPRdm 00:36:43.509 13:20:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cnGMUKPRdm 00:36:43.767 13:20:43 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HesbKVlmgd 00:36:43.767 13:20:43 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HesbKVlmgd 00:36:44.026 13:20:43 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:44.026 13:20:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:44.026 13:20:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.026 13:20:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:44.026 13:20:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.284 13:20:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.cnGMUKPRdm == \/\t\m\p\/\t\m\p\.\c\n\G\M\U\K\P\R\d\m ]] 00:36:44.284 13:20:43 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:44.284 13:20:43 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:44.284 13:20:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.284 13:20:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:44.284 13:20:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.543 13:20:44 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.HesbKVlmgd == \/\t\m\p\/\t\m\p\.\H\e\s\b\K\V\l\m\g\d ]] 00:36:44.543 13:20:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:36:44.543 13:20:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:44.543 13:20:44 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:44.543 13:20:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.801 13:20:44 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:44.801 13:20:44 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:44.801 13:20:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.060 [2024-11-29 13:20:44.667727] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:45.060 nvme0n1 00:36:45.060 13:20:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:45.060 13:20:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:45.060 13:20:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.060 13:20:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.060 13:20:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.060 13:20:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:45.318 13:20:44 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:45.318 13:20:44 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:45.318 13:20:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:45.318 13:20:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.318 13:20:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.318 13:20:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.318 13:20:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:45.576 13:20:45 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:45.576 13:20:45 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:45.576 Running I/O for 1 seconds... 00:36:46.511 17689.00 IOPS, 69.10 MiB/s 00:36:46.511 Latency(us) 00:36:46.511 [2024-11-29T12:20:46.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:46.512 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:46.512 nvme0n1 : 1.00 17736.83 69.28 0.00 0.00 7203.84 4530.53 17666.23 00:36:46.512 [2024-11-29T12:20:46.332Z] =================================================================================================================== 00:36:46.512 [2024-11-29T12:20:46.332Z] Total : 17736.83 69.28 0.00 0.00 7203.84 4530.53 17666.23 00:36:46.512 { 00:36:46.512 "results": [ 00:36:46.512 { 00:36:46.512 "job": "nvme0n1", 00:36:46.512 "core_mask": "0x2", 00:36:46.512 "workload": "randrw", 00:36:46.512 "percentage": 50, 00:36:46.512 "status": "finished", 00:36:46.512 "queue_depth": 128, 00:36:46.512 "io_size": 4096, 00:36:46.512 "runtime": 1.00452, 00:36:46.512 "iops": 17736.82953052204, 00:36:46.512 "mibps": 69.28449035360173, 
00:36:46.512 "io_failed": 0, 00:36:46.512 "io_timeout": 0, 00:36:46.512 "avg_latency_us": 7203.838388251573, 00:36:46.512 "min_latency_us": 4530.532173913043, 00:36:46.512 "max_latency_us": 17666.22608695652 00:36:46.512 } 00:36:46.512 ], 00:36:46.512 "core_count": 1 00:36:46.512 } 00:36:46.512 13:20:46 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:46.512 13:20:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:46.771 13:20:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:46.771 13:20:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:46.771 13:20:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.771 13:20:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.771 13:20:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.771 13:20:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.030 13:20:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:47.030 13:20:46 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:47.030 13:20:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:47.030 13:20:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.030 13:20:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.030 13:20:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:47.030 13:20:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.289 13:20:46 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:47.289 13:20:46 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:47.289 13:20:46 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:47.289 13:20:46 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:47.289 13:20:46 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:47.289 13:20:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:47.289 13:20:46 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:47.289 13:20:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:47.289 13:20:46 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:47.289 13:20:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:47.289 [2024-11-29 13:20:47.058632] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:47.289 [2024-11-29 13:20:47.058994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d95210 (107): Transport endpoint is not connected 00:36:47.289 [2024-11-29 13:20:47.059988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d95210 (9): Bad file descriptor 00:36:47.289 [2024-11-29 13:20:47.060988] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:47.289 [2024-11-29 13:20:47.060998] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:47.289 [2024-11-29 13:20:47.061006] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:47.289 [2024-11-29 13:20:47.061016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:36:47.289 request: 00:36:47.289 { 00:36:47.289 "name": "nvme0", 00:36:47.289 "trtype": "tcp", 00:36:47.289 "traddr": "127.0.0.1", 00:36:47.289 "adrfam": "ipv4", 00:36:47.289 "trsvcid": "4420", 00:36:47.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:47.289 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:47.289 "prchk_reftag": false, 00:36:47.289 "prchk_guard": false, 00:36:47.289 "hdgst": false, 00:36:47.289 "ddgst": false, 00:36:47.289 "psk": "key1", 00:36:47.289 "allow_unrecognized_csi": false, 00:36:47.289 "method": "bdev_nvme_attach_controller", 00:36:47.289 "req_id": 1 00:36:47.289 } 00:36:47.289 Got JSON-RPC error response 00:36:47.289 response: 00:36:47.289 { 00:36:47.289 "code": -5, 00:36:47.289 "message": "Input/output error" 00:36:47.289 } 00:36:47.289 13:20:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:47.289 13:20:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:47.289 13:20:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:47.289 13:20:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:47.289 13:20:47 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:47.289 13:20:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:47.289 13:20:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.289 13:20:47 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:36:47.289 13:20:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.289 13:20:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:47.547 13:20:47 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:47.547 13:20:47 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:47.547 13:20:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:47.547 13:20:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.547 13:20:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.547 13:20:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:47.547 13:20:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.806 13:20:47 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:47.806 13:20:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:47.806 13:20:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:48.065 13:20:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:48.065 13:20:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:48.065 13:20:47 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:48.065 13:20:47 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:48.065 13:20:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.323 13:20:48 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:36:48.323 13:20:48 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.cnGMUKPRdm 00:36:48.323 13:20:48 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.cnGMUKPRdm 00:36:48.323 13:20:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:48.323 13:20:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.cnGMUKPRdm 00:36:48.323 13:20:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:48.323 13:20:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.323 13:20:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:48.323 13:20:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.323 13:20:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cnGMUKPRdm 00:36:48.323 13:20:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cnGMUKPRdm 00:36:48.582 [2024-11-29 13:20:48.242148] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cnGMUKPRdm': 0100660 00:36:48.582 [2024-11-29 13:20:48.242178] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:48.582 request: 00:36:48.582 { 00:36:48.582 "name": "key0", 00:36:48.582 "path": "/tmp/tmp.cnGMUKPRdm", 00:36:48.582 "method": "keyring_file_add_key", 00:36:48.582 "req_id": 1 00:36:48.582 } 00:36:48.582 Got JSON-RPC error response 00:36:48.582 response: 00:36:48.582 { 00:36:48.582 "code": -1, 00:36:48.582 "message": "Operation not permitted" 00:36:48.582 } 00:36:48.582 13:20:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:48.582 13:20:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:48.582 13:20:48 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:48.582 13:20:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:48.582 13:20:48 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.cnGMUKPRdm 00:36:48.582 13:20:48 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cnGMUKPRdm 00:36:48.582 13:20:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cnGMUKPRdm 00:36:48.840 13:20:48 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.cnGMUKPRdm 00:36:48.840 13:20:48 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:48.840 13:20:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:48.840 13:20:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.840 13:20:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.840 13:20:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.840 13:20:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.840 13:20:48 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:48.840 13:20:48 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.840 13:20:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:48.840 13:20:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.840 13:20:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:48.840 13:20:48 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.840 13:20:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:48.840 13:20:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:48.841 13:20:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.841 13:20:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.099 [2024-11-29 13:20:48.827717] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.cnGMUKPRdm': No such file or directory 00:36:49.099 [2024-11-29 13:20:48.827746] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:49.099 [2024-11-29 13:20:48.827763] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:49.099 [2024-11-29 13:20:48.827771] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:49.099 [2024-11-29 13:20:48.827778] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:49.099 [2024-11-29 13:20:48.827784] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:49.099 request: 00:36:49.099 { 00:36:49.099 "name": "nvme0", 00:36:49.099 "trtype": "tcp", 00:36:49.099 "traddr": "127.0.0.1", 00:36:49.099 "adrfam": "ipv4", 00:36:49.099 "trsvcid": "4420", 00:36:49.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.099 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:36:49.099 "prchk_reftag": false, 00:36:49.099 "prchk_guard": false, 00:36:49.099 "hdgst": false, 00:36:49.099 "ddgst": false, 00:36:49.099 "psk": "key0", 00:36:49.099 "allow_unrecognized_csi": false, 00:36:49.099 "method": "bdev_nvme_attach_controller", 00:36:49.099 "req_id": 1 00:36:49.099 } 00:36:49.099 Got JSON-RPC error response 00:36:49.099 response: 00:36:49.099 { 00:36:49.099 "code": -19, 00:36:49.099 "message": "No such device" 00:36:49.099 } 00:36:49.099 13:20:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:49.099 13:20:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:49.099 13:20:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:49.099 13:20:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:49.099 13:20:48 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:49.099 13:20:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:49.358 13:20:49 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Qe6N6BBo2j 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:49.358 13:20:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:49.358 13:20:49 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:36:49.358 13:20:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:49.358 13:20:49 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:49.358 13:20:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:49.358 13:20:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Qe6N6BBo2j 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Qe6N6BBo2j 00:36:49.358 13:20:49 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Qe6N6BBo2j 00:36:49.358 13:20:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qe6N6BBo2j 00:36:49.358 13:20:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qe6N6BBo2j 00:36:49.617 13:20:49 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.617 13:20:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.875 nvme0n1 00:36:49.875 13:20:49 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:49.875 13:20:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.875 13:20:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.875 13:20:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.875 13:20:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.875 13:20:49 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.133 13:20:49 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:50.133 13:20:49 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:50.133 13:20:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:50.133 13:20:49 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:50.133 13:20:49 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:50.133 13:20:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.133 13:20:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.133 13:20:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.391 13:20:50 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:50.392 13:20:50 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:50.392 13:20:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:50.392 13:20:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.392 13:20:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.392 13:20:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.392 13:20:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.650 13:20:50 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:50.650 13:20:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:50.650 13:20:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:36:50.909 13:20:50 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:50.909 13:20:50 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:50.909 13:20:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.909 13:20:50 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:50.909 13:20:50 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qe6N6BBo2j 00:36:50.909 13:20:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qe6N6BBo2j 00:36:51.167 13:20:50 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HesbKVlmgd 00:36:51.167 13:20:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HesbKVlmgd 00:36:51.425 13:20:51 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.425 13:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.683 nvme0n1 00:36:51.683 13:20:51 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:51.683 13:20:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:51.942 13:20:51 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:51.942 "subsystems": [ 00:36:51.942 { 00:36:51.942 "subsystem": 
"keyring", 00:36:51.942 "config": [ 00:36:51.942 { 00:36:51.942 "method": "keyring_file_add_key", 00:36:51.942 "params": { 00:36:51.942 "name": "key0", 00:36:51.942 "path": "/tmp/tmp.Qe6N6BBo2j" 00:36:51.942 } 00:36:51.942 }, 00:36:51.942 { 00:36:51.942 "method": "keyring_file_add_key", 00:36:51.942 "params": { 00:36:51.942 "name": "key1", 00:36:51.942 "path": "/tmp/tmp.HesbKVlmgd" 00:36:51.942 } 00:36:51.942 } 00:36:51.942 ] 00:36:51.942 }, 00:36:51.942 { 00:36:51.942 "subsystem": "iobuf", 00:36:51.942 "config": [ 00:36:51.942 { 00:36:51.942 "method": "iobuf_set_options", 00:36:51.942 "params": { 00:36:51.942 "small_pool_count": 8192, 00:36:51.942 "large_pool_count": 1024, 00:36:51.942 "small_bufsize": 8192, 00:36:51.942 "large_bufsize": 135168, 00:36:51.942 "enable_numa": false 00:36:51.942 } 00:36:51.942 } 00:36:51.942 ] 00:36:51.942 }, 00:36:51.942 { 00:36:51.942 "subsystem": "sock", 00:36:51.942 "config": [ 00:36:51.942 { 00:36:51.942 "method": "sock_set_default_impl", 00:36:51.942 "params": { 00:36:51.942 "impl_name": "posix" 00:36:51.942 } 00:36:51.942 }, 00:36:51.942 { 00:36:51.942 "method": "sock_impl_set_options", 00:36:51.943 "params": { 00:36:51.943 "impl_name": "ssl", 00:36:51.943 "recv_buf_size": 4096, 00:36:51.943 "send_buf_size": 4096, 00:36:51.943 "enable_recv_pipe": true, 00:36:51.943 "enable_quickack": false, 00:36:51.943 "enable_placement_id": 0, 00:36:51.943 "enable_zerocopy_send_server": true, 00:36:51.943 "enable_zerocopy_send_client": false, 00:36:51.943 "zerocopy_threshold": 0, 00:36:51.943 "tls_version": 0, 00:36:51.943 "enable_ktls": false 00:36:51.943 } 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "method": "sock_impl_set_options", 00:36:51.943 "params": { 00:36:51.943 "impl_name": "posix", 00:36:51.943 "recv_buf_size": 2097152, 00:36:51.943 "send_buf_size": 2097152, 00:36:51.943 "enable_recv_pipe": true, 00:36:51.943 "enable_quickack": false, 00:36:51.943 "enable_placement_id": 0, 00:36:51.943 "enable_zerocopy_send_server": true, 
00:36:51.943 "enable_zerocopy_send_client": false, 00:36:51.943 "zerocopy_threshold": 0, 00:36:51.943 "tls_version": 0, 00:36:51.943 "enable_ktls": false 00:36:51.943 } 00:36:51.943 } 00:36:51.943 ] 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "subsystem": "vmd", 00:36:51.943 "config": [] 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "subsystem": "accel", 00:36:51.943 "config": [ 00:36:51.943 { 00:36:51.943 "method": "accel_set_options", 00:36:51.943 "params": { 00:36:51.943 "small_cache_size": 128, 00:36:51.943 "large_cache_size": 16, 00:36:51.943 "task_count": 2048, 00:36:51.943 "sequence_count": 2048, 00:36:51.943 "buf_count": 2048 00:36:51.943 } 00:36:51.943 } 00:36:51.943 ] 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "subsystem": "bdev", 00:36:51.943 "config": [ 00:36:51.943 { 00:36:51.943 "method": "bdev_set_options", 00:36:51.943 "params": { 00:36:51.943 "bdev_io_pool_size": 65535, 00:36:51.943 "bdev_io_cache_size": 256, 00:36:51.943 "bdev_auto_examine": true, 00:36:51.943 "iobuf_small_cache_size": 128, 00:36:51.943 "iobuf_large_cache_size": 16 00:36:51.943 } 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "method": "bdev_raid_set_options", 00:36:51.943 "params": { 00:36:51.943 "process_window_size_kb": 1024, 00:36:51.943 "process_max_bandwidth_mb_sec": 0 00:36:51.943 } 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "method": "bdev_iscsi_set_options", 00:36:51.943 "params": { 00:36:51.943 "timeout_sec": 30 00:36:51.943 } 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "method": "bdev_nvme_set_options", 00:36:51.943 "params": { 00:36:51.943 "action_on_timeout": "none", 00:36:51.943 "timeout_us": 0, 00:36:51.943 "timeout_admin_us": 0, 00:36:51.943 "keep_alive_timeout_ms": 10000, 00:36:51.943 "arbitration_burst": 0, 00:36:51.943 "low_priority_weight": 0, 00:36:51.943 "medium_priority_weight": 0, 00:36:51.943 "high_priority_weight": 0, 00:36:51.943 "nvme_adminq_poll_period_us": 10000, 00:36:51.943 "nvme_ioq_poll_period_us": 0, 00:36:51.943 "io_queue_requests": 512, 
00:36:51.943 "delay_cmd_submit": true, 00:36:51.943 "transport_retry_count": 4, 00:36:51.943 "bdev_retry_count": 3, 00:36:51.943 "transport_ack_timeout": 0, 00:36:51.943 "ctrlr_loss_timeout_sec": 0, 00:36:51.943 "reconnect_delay_sec": 0, 00:36:51.943 "fast_io_fail_timeout_sec": 0, 00:36:51.943 "disable_auto_failback": false, 00:36:51.943 "generate_uuids": false, 00:36:51.943 "transport_tos": 0, 00:36:51.943 "nvme_error_stat": false, 00:36:51.943 "rdma_srq_size": 0, 00:36:51.943 "io_path_stat": false, 00:36:51.943 "allow_accel_sequence": false, 00:36:51.943 "rdma_max_cq_size": 0, 00:36:51.943 "rdma_cm_event_timeout_ms": 0, 00:36:51.943 "dhchap_digests": [ 00:36:51.943 "sha256", 00:36:51.943 "sha384", 00:36:51.943 "sha512" 00:36:51.943 ], 00:36:51.943 "dhchap_dhgroups": [ 00:36:51.943 "null", 00:36:51.943 "ffdhe2048", 00:36:51.943 "ffdhe3072", 00:36:51.943 "ffdhe4096", 00:36:51.943 "ffdhe6144", 00:36:51.943 "ffdhe8192" 00:36:51.943 ] 00:36:51.943 } 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "method": "bdev_nvme_attach_controller", 00:36:51.943 "params": { 00:36:51.943 "name": "nvme0", 00:36:51.943 "trtype": "TCP", 00:36:51.943 "adrfam": "IPv4", 00:36:51.943 "traddr": "127.0.0.1", 00:36:51.943 "trsvcid": "4420", 00:36:51.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.943 "prchk_reftag": false, 00:36:51.943 "prchk_guard": false, 00:36:51.943 "ctrlr_loss_timeout_sec": 0, 00:36:51.943 "reconnect_delay_sec": 0, 00:36:51.943 "fast_io_fail_timeout_sec": 0, 00:36:51.943 "psk": "key0", 00:36:51.943 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.943 "hdgst": false, 00:36:51.943 "ddgst": false, 00:36:51.943 "multipath": "multipath" 00:36:51.943 } 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "method": "bdev_nvme_set_hotplug", 00:36:51.943 "params": { 00:36:51.943 "period_us": 100000, 00:36:51.943 "enable": false 00:36:51.943 } 00:36:51.943 }, 00:36:51.943 { 00:36:51.943 "method": "bdev_wait_for_examine" 00:36:51.943 } 00:36:51.943 ] 00:36:51.943 }, 00:36:51.943 { 
00:36:51.943 "subsystem": "nbd", 00:36:51.943 "config": [] 00:36:51.943 } 00:36:51.943 ] 00:36:51.943 }' 00:36:51.943 13:20:51 keyring_file -- keyring/file.sh@115 -- # killprocess 2271188 00:36:51.943 13:20:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2271188 ']' 00:36:51.943 13:20:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2271188 00:36:51.943 13:20:51 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:51.943 13:20:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:51.943 13:20:51 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2271188 00:36:51.943 13:20:51 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:51.943 13:20:51 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:51.943 13:20:51 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2271188' 00:36:51.943 killing process with pid 2271188 00:36:51.943 13:20:51 keyring_file -- common/autotest_common.sh@973 -- # kill 2271188 00:36:51.943 Received shutdown signal, test time was about 1.000000 seconds 00:36:51.943 00:36:51.943 Latency(us) 00:36:51.943 [2024-11-29T12:20:51.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.943 [2024-11-29T12:20:51.763Z] =================================================================================================================== 00:36:51.943 [2024-11-29T12:20:51.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:51.944 13:20:51 keyring_file -- common/autotest_common.sh@978 -- # wait 2271188 00:36:52.202 13:20:51 keyring_file -- keyring/file.sh@118 -- # bperfpid=2272709 00:36:52.202 13:20:51 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2272709 /var/tmp/bperf.sock 00:36:52.202 13:20:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2272709 ']' 00:36:52.203 13:20:51 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:52.203 13:20:51 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:52.203 13:20:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:52.203 13:20:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:52.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:52.203 13:20:51 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:52.203 "subsystems": [ 00:36:52.203 { 00:36:52.203 "subsystem": "keyring", 00:36:52.203 "config": [ 00:36:52.203 { 00:36:52.203 "method": "keyring_file_add_key", 00:36:52.203 "params": { 00:36:52.203 "name": "key0", 00:36:52.203 "path": "/tmp/tmp.Qe6N6BBo2j" 00:36:52.203 } 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "method": "keyring_file_add_key", 00:36:52.203 "params": { 00:36:52.203 "name": "key1", 00:36:52.203 "path": "/tmp/tmp.HesbKVlmgd" 00:36:52.203 } 00:36:52.203 } 00:36:52.203 ] 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "subsystem": "iobuf", 00:36:52.203 "config": [ 00:36:52.203 { 00:36:52.203 "method": "iobuf_set_options", 00:36:52.203 "params": { 00:36:52.203 "small_pool_count": 8192, 00:36:52.203 "large_pool_count": 1024, 00:36:52.203 "small_bufsize": 8192, 00:36:52.203 "large_bufsize": 135168, 00:36:52.203 "enable_numa": false 00:36:52.203 } 00:36:52.203 } 00:36:52.203 ] 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "subsystem": "sock", 00:36:52.203 "config": [ 00:36:52.203 { 00:36:52.203 "method": "sock_set_default_impl", 00:36:52.203 "params": { 00:36:52.203 "impl_name": "posix" 00:36:52.203 } 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "method": "sock_impl_set_options", 00:36:52.203 "params": { 00:36:52.203 "impl_name": "ssl", 00:36:52.203 "recv_buf_size": 4096, 00:36:52.203 
"send_buf_size": 4096, 00:36:52.203 "enable_recv_pipe": true, 00:36:52.203 "enable_quickack": false, 00:36:52.203 "enable_placement_id": 0, 00:36:52.203 "enable_zerocopy_send_server": true, 00:36:52.203 "enable_zerocopy_send_client": false, 00:36:52.203 "zerocopy_threshold": 0, 00:36:52.203 "tls_version": 0, 00:36:52.203 "enable_ktls": false 00:36:52.203 } 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "method": "sock_impl_set_options", 00:36:52.203 "params": { 00:36:52.203 "impl_name": "posix", 00:36:52.203 "recv_buf_size": 2097152, 00:36:52.203 "send_buf_size": 2097152, 00:36:52.203 "enable_recv_pipe": true, 00:36:52.203 "enable_quickack": false, 00:36:52.203 "enable_placement_id": 0, 00:36:52.203 "enable_zerocopy_send_server": true, 00:36:52.203 "enable_zerocopy_send_client": false, 00:36:52.203 "zerocopy_threshold": 0, 00:36:52.203 "tls_version": 0, 00:36:52.203 "enable_ktls": false 00:36:52.203 } 00:36:52.203 } 00:36:52.203 ] 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "subsystem": "vmd", 00:36:52.203 "config": [] 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "subsystem": "accel", 00:36:52.203 "config": [ 00:36:52.203 { 00:36:52.203 "method": "accel_set_options", 00:36:52.203 "params": { 00:36:52.203 "small_cache_size": 128, 00:36:52.203 "large_cache_size": 16, 00:36:52.203 "task_count": 2048, 00:36:52.203 "sequence_count": 2048, 00:36:52.203 "buf_count": 2048 00:36:52.203 } 00:36:52.203 } 00:36:52.203 ] 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "subsystem": "bdev", 00:36:52.203 "config": [ 00:36:52.203 { 00:36:52.203 "method": "bdev_set_options", 00:36:52.203 "params": { 00:36:52.203 "bdev_io_pool_size": 65535, 00:36:52.203 "bdev_io_cache_size": 256, 00:36:52.203 "bdev_auto_examine": true, 00:36:52.203 "iobuf_small_cache_size": 128, 00:36:52.203 "iobuf_large_cache_size": 16 00:36:52.203 } 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "method": "bdev_raid_set_options", 00:36:52.203 "params": { 00:36:52.203 "process_window_size_kb": 1024, 00:36:52.203 
"process_max_bandwidth_mb_sec": 0 00:36:52.203 } 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "method": "bdev_iscsi_set_options", 00:36:52.203 "params": { 00:36:52.203 "timeout_sec": 30 00:36:52.203 } 00:36:52.203 }, 00:36:52.203 { 00:36:52.203 "method": "bdev_nvme_set_options", 00:36:52.203 "params": { 00:36:52.203 "action_on_timeout": "none", 00:36:52.203 "timeout_us": 0, 00:36:52.203 "timeout_admin_us": 0, 00:36:52.203 "keep_alive_timeout_ms": 10000, 00:36:52.203 "arbitration_burst": 0, 00:36:52.203 "low_priority_weight": 0, 00:36:52.203 "medium_priority_weight": 0, 00:36:52.203 "high_priority_weight": 0, 00:36:52.203 "nvme_adminq_poll_period_us": 10000, 00:36:52.203 "nvme_ioq_poll_period_us": 0, 00:36:52.203 "io_queue_requests": 512, 00:36:52.203 "delay_cmd_submit": true, 00:36:52.203 "transport_retry_count": 4, 00:36:52.203 "bdev_retry_count": 3, 00:36:52.203 "transport_ack_timeout": 0, 00:36:52.203 "ctrlr_loss_timeout_sec": 0, 00:36:52.203 "reconnect_delay_sec": 0, 00:36:52.203 "fast_io_fail_timeout_sec": 0, 00:36:52.203 "disable_auto_failback": false, 00:36:52.203 "generate_uuids": false, 00:36:52.203 "transport_tos": 0, 00:36:52.203 "nvme_error_stat": false, 00:36:52.203 "rdma_srq_size": 0, 00:36:52.203 "io_path_stat": false, 00:36:52.203 "allow_accel_sequence": false, 00:36:52.203 "rdma_max_cq_size": 0, 00:36:52.203 "rdma_cm_event_timeout_ms": 0, 00:36:52.203 "dhchap_digests": [ 00:36:52.203 "sha256", 00:36:52.203 "sha384", 00:36:52.203 "sha512" 00:36:52.203 ], 00:36:52.203 "dhchap_dhgroups": [ 00:36:52.203 "null", 00:36:52.203 "ffdhe2048", 00:36:52.204 "ffdhe3072", 00:36:52.204 "ffdhe4096", 00:36:52.204 "ffdhe6144", 00:36:52.204 "ffdhe8192" 00:36:52.204 ] 00:36:52.204 } 00:36:52.204 }, 00:36:52.204 { 00:36:52.204 "method": "bdev_nvme_attach_controller", 00:36:52.204 "params": { 00:36:52.204 "name": "nvme0", 00:36:52.204 "trtype": "TCP", 00:36:52.204 "adrfam": "IPv4", 00:36:52.204 "traddr": "127.0.0.1", 00:36:52.204 "trsvcid": "4420", 00:36:52.204 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:36:52.204 "prchk_reftag": false, 00:36:52.204 "prchk_guard": false, 00:36:52.204 "ctrlr_loss_timeout_sec": 0, 00:36:52.204 "reconnect_delay_sec": 0, 00:36:52.204 "fast_io_fail_timeout_sec": 0, 00:36:52.204 "psk": "key0", 00:36:52.204 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.204 "hdgst": false, 00:36:52.204 "ddgst": false, 00:36:52.204 "multipath": "multipath" 00:36:52.204 } 00:36:52.204 }, 00:36:52.204 { 00:36:52.204 "method": "bdev_nvme_set_hotplug", 00:36:52.204 "params": { 00:36:52.204 "period_us": 100000, 00:36:52.204 "enable": false 00:36:52.204 } 00:36:52.204 }, 00:36:52.204 { 00:36:52.204 "method": "bdev_wait_for_examine" 00:36:52.204 } 00:36:52.204 ] 00:36:52.204 }, 00:36:52.204 { 00:36:52.204 "subsystem": "nbd", 00:36:52.204 "config": [] 00:36:52.204 } 00:36:52.204 ] 00:36:52.204 }' 00:36:52.204 13:20:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:52.204 13:20:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:52.204 [2024-11-29 13:20:51.859155] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:36:52.204 [2024-11-29 13:20:51.859203] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272709 ] 00:36:52.204 [2024-11-29 13:20:51.921587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.204 [2024-11-29 13:20:51.964604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:52.462 [2024-11-29 13:20:52.126139] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:53.029 13:20:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:53.029 13:20:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:53.029 13:20:52 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:53.029 13:20:52 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:53.029 13:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.288 13:20:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:53.288 13:20:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:53.288 13:20:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.288 13:20:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.288 13:20:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.288 13:20:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.288 13:20:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.288 13:20:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:53.288 13:20:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:53.288 13:20:53 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:53.288 13:20:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.288 13:20:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.288 13:20:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.288 13:20:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:53.546 13:20:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:53.546 13:20:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:53.546 13:20:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:53.546 13:20:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:53.805 13:20:53 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:53.805 13:20:53 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:53.805 13:20:53 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Qe6N6BBo2j /tmp/tmp.HesbKVlmgd 00:36:53.805 13:20:53 keyring_file -- keyring/file.sh@20 -- # killprocess 2272709 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2272709 ']' 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2272709 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2272709 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2272709' 00:36:53.805 killing process with pid 2272709 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@973 -- # kill 2272709 00:36:53.805 Received shutdown signal, test time was about 1.000000 seconds 00:36:53.805 00:36:53.805 Latency(us) 00:36:53.805 [2024-11-29T12:20:53.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.805 [2024-11-29T12:20:53.625Z] =================================================================================================================== 00:36:53.805 [2024-11-29T12:20:53.625Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:53.805 13:20:53 keyring_file -- common/autotest_common.sh@978 -- # wait 2272709 00:36:54.064 13:20:53 keyring_file -- keyring/file.sh@21 -- # killprocess 2271149 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2271149 ']' 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2271149 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2271149 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2271149' 00:36:54.064 killing process with pid 2271149 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@973 -- # kill 2271149 00:36:54.064 13:20:53 keyring_file -- common/autotest_common.sh@978 -- # wait 2271149 00:36:54.323 00:36:54.323 real 0m11.729s 00:36:54.323 user 0m29.109s 00:36:54.323 sys 0m2.641s 00:36:54.323 13:20:54 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:36:54.323 13:20:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:54.323 ************************************ 00:36:54.323 END TEST keyring_file 00:36:54.323 ************************************ 00:36:54.323 13:20:54 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:54.323 13:20:54 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:54.323 13:20:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:54.323 13:20:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:54.323 13:20:54 -- common/autotest_common.sh@10 -- # set +x 00:36:54.323 ************************************ 00:36:54.323 START TEST keyring_linux 00:36:54.323 ************************************ 00:36:54.323 13:20:54 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:54.323 Joined session keyring: 485531610 00:36:54.583 * Looking for test storage... 
00:36:54.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:54.583 13:20:54 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:54.583 13:20:54 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:36:54.583 13:20:54 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:54.583 13:20:54 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:54.583 13:20:54 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:54.583 13:20:54 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:54.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.583 --rc genhtml_branch_coverage=1 00:36:54.583 --rc genhtml_function_coverage=1 00:36:54.583 --rc genhtml_legend=1 00:36:54.583 --rc geninfo_all_blocks=1 00:36:54.583 --rc geninfo_unexecuted_blocks=1 00:36:54.583 00:36:54.583 ' 00:36:54.583 13:20:54 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:54.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.583 --rc genhtml_branch_coverage=1 00:36:54.583 --rc genhtml_function_coverage=1 00:36:54.583 --rc genhtml_legend=1 00:36:54.583 --rc geninfo_all_blocks=1 00:36:54.583 --rc geninfo_unexecuted_blocks=1 00:36:54.583 00:36:54.583 ' 
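The trace above steps through the `cmp_versions` helper in `scripts/common.sh`, which splits two dotted version strings into fields and compares them numerically, field by field (here concluding that lcov 1.15 is older than 2, so the branch-coverage `LCOV_OPTS` are exported). A minimal standalone sketch of that field-wise comparison, assuming plain numeric dot-separated versions (`version_lt` is a hypothetical simplified helper; the real `cmp_versions` also splits on `-` and `:`):

```shell
#!/usr/bin/env bash
# version_lt A B -> exit 0 if version A sorts strictly before version B.
# Numeric field-wise comparison over dot-separated components, a
# simplified sketch of the cmp_versions loop traced above.
version_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        # Missing fields count as 0, so 2 compares like 2.0
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

Run with bash (the comparison relies on bash arrays and arithmetic); the log's `lt 1.15 2` check corresponds to the first call.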
00:36:54.583 13:20:54 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:54.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.583 --rc genhtml_branch_coverage=1 00:36:54.583 --rc genhtml_function_coverage=1 00:36:54.583 --rc genhtml_legend=1 00:36:54.583 --rc geninfo_all_blocks=1 00:36:54.583 --rc geninfo_unexecuted_blocks=1 00:36:54.583 00:36:54.583 ' 00:36:54.583 13:20:54 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:54.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.583 --rc genhtml_branch_coverage=1 00:36:54.583 --rc genhtml_function_coverage=1 00:36:54.583 --rc genhtml_legend=1 00:36:54.583 --rc geninfo_all_blocks=1 00:36:54.583 --rc geninfo_unexecuted_blocks=1 00:36:54.583 00:36:54.583 ' 00:36:54.583 13:20:54 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.583 13:20:54 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.583 13:20:54 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.583 13:20:54 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.583 13:20:54 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.583 13:20:54 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:54.583 13:20:54 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:54.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:54.583 13:20:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:54.583 13:20:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:54.583 13:20:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:54.583 13:20:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:54.583 13:20:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:54.583 13:20:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:54.583 13:20:54 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:54.583 13:20:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:54.583 /tmp/:spdk-test:key0 00:36:54.584 13:20:54 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:54.584 13:20:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:54.584 13:20:54 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:54.584 13:20:54 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:54.584 13:20:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:54.584 13:20:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:54.584 13:20:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:54.584 13:20:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:54.584 13:20:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:54.584 13:20:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:54.584 13:20:54 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:54.584 13:20:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:54.584 13:20:54 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:54.584 13:20:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:54.584 13:20:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:54.584 /tmp/:spdk-test:key1 00:36:54.584 13:20:54 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2273257 00:36:54.584 13:20:54 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2273257 00:36:54.584 13:20:54 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:54.584 13:20:54 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2273257 ']' 00:36:54.584 13:20:54 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:54.584 13:20:54 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:54.584 13:20:54 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:54.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:54.584 13:20:54 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:54.584 13:20:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:54.844 [2024-11-29 13:20:54.448253] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:36:54.844 [2024-11-29 13:20:54.448304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273257 ] 00:36:54.844 [2024-11-29 13:20:54.510492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.844 [2024-11-29 13:20:54.552735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:55.103 13:20:54 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:55.103 [2024-11-29 13:20:54.775386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:55.103 null0 00:36:55.103 [2024-11-29 13:20:54.807423] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:55.103 [2024-11-29 13:20:54.807790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.103 13:20:54 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:55.103 837794540 00:36:55.103 13:20:54 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:55.103 432143985 00:36:55.103 13:20:54 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2273271 00:36:55.103 13:20:54 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2273271 /var/tmp/bperf.sock 00:36:55.103 13:20:54 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2273271 ']' 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:55.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:55.103 13:20:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:55.103 [2024-11-29 13:20:54.881333] Starting SPDK v25.01-pre git sha1 0b658ecad / DPDK 24.03.0 initialization... 
00:36:55.103 [2024-11-29 13:20:54.881374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273271 ] 00:36:55.361 [2024-11-29 13:20:54.943611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.361 [2024-11-29 13:20:54.984567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.361 13:20:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:55.361 13:20:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:55.361 13:20:55 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:55.361 13:20:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:55.619 13:20:55 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:55.619 13:20:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:55.878 13:20:55 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:55.878 13:20:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:56.136 [2024-11-29 13:20:55.698840] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:56.136 nvme0n1 00:36:56.136 13:20:55 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:36:56.136 13:20:55 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:56.136 13:20:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:56.136 13:20:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:56.136 13:20:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:56.136 13:20:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.506 13:20:55 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:56.506 13:20:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:56.506 13:20:55 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:56.506 13:20:55 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:56.506 13:20:55 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.506 13:20:55 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:56.506 13:20:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.506 13:20:56 keyring_linux -- keyring/linux.sh@25 -- # sn=837794540 00:36:56.506 13:20:56 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:56.506 13:20:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:56.506 13:20:56 keyring_linux -- keyring/linux.sh@26 -- # [[ 837794540 == \8\3\7\7\9\4\5\4\0 ]] 00:36:56.506 13:20:56 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 837794540 00:36:56.506 13:20:56 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:56.506 13:20:56 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:56.506 Running I/O for 1 seconds... 00:36:57.512 19022.00 IOPS, 74.30 MiB/s 00:36:57.512 Latency(us) 00:36:57.512 [2024-11-29T12:20:57.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.512 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:57.512 nvme0n1 : 1.01 19019.68 74.30 0.00 0.00 6704.55 5556.31 13449.13 00:36:57.512 [2024-11-29T12:20:57.332Z] =================================================================================================================== 00:36:57.512 [2024-11-29T12:20:57.332Z] Total : 19019.68 74.30 0.00 0.00 6704.55 5556.31 13449.13 00:36:57.512 { 00:36:57.512 "results": [ 00:36:57.512 { 00:36:57.512 "job": "nvme0n1", 00:36:57.512 "core_mask": "0x2", 00:36:57.512 "workload": "randread", 00:36:57.512 "status": "finished", 00:36:57.512 "queue_depth": 128, 00:36:57.512 "io_size": 4096, 00:36:57.512 "runtime": 1.006852, 00:36:57.512 "iops": 19019.67717201734, 00:36:57.512 "mibps": 74.29561395319273, 00:36:57.512 "io_failed": 0, 00:36:57.512 "io_timeout": 0, 00:36:57.512 "avg_latency_us": 6704.553758020206, 00:36:57.512 "min_latency_us": 5556.313043478261, 00:36:57.512 "max_latency_us": 13449.126956521739 00:36:57.512 } 00:36:57.512 ], 00:36:57.512 "core_count": 1 00:36:57.512 } 00:36:57.512 13:20:57 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:57.512 13:20:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:57.769 13:20:57 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:57.769 13:20:57 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:57.769 13:20:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:57.770 13:20:57 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:57.770 13:20:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:57.770 13:20:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.027 13:20:57 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:58.027 13:20:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:58.027 13:20:57 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:58.027 13:20:57 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:58.027 13:20:57 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:58.027 13:20:57 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:58.027 13:20:57 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:58.027 13:20:57 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:58.027 13:20:57 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:58.027 13:20:57 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:58.027 13:20:57 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:58.027 13:20:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:58.285 [2024-11-29 13:20:57.886591] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:58.285 [2024-11-29 13:20:57.887258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8fa0 (107): Transport endpoint is not connected 00:36:58.285 [2024-11-29 13:20:57.888254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8fa0 (9): Bad file descriptor 00:36:58.285 [2024-11-29 13:20:57.889255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:58.285 [2024-11-29 13:20:57.889264] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:58.285 [2024-11-29 13:20:57.889271] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:58.285 [2024-11-29 13:20:57.889279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:58.285 request: 00:36:58.285 { 00:36:58.285 "name": "nvme0", 00:36:58.285 "trtype": "tcp", 00:36:58.285 "traddr": "127.0.0.1", 00:36:58.285 "adrfam": "ipv4", 00:36:58.285 "trsvcid": "4420", 00:36:58.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.285 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.285 "prchk_reftag": false, 00:36:58.285 "prchk_guard": false, 00:36:58.285 "hdgst": false, 00:36:58.285 "ddgst": false, 00:36:58.285 "psk": ":spdk-test:key1", 00:36:58.285 "allow_unrecognized_csi": false, 00:36:58.285 "method": "bdev_nvme_attach_controller", 00:36:58.285 "req_id": 1 00:36:58.285 } 00:36:58.285 Got JSON-RPC error response 00:36:58.285 response: 00:36:58.285 { 00:36:58.285 "code": -5, 00:36:58.285 "message": "Input/output error" 00:36:58.285 } 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@33 -- # sn=837794540 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 837794540 00:36:58.285 1 links removed 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:58.285 
13:20:57 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@33 -- # sn=432143985 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 432143985 00:36:58.285 1 links removed 00:36:58.285 13:20:57 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2273271 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2273271 ']' 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2273271 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273271 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273271' 00:36:58.285 killing process with pid 2273271 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 2273271 00:36:58.285 Received shutdown signal, test time was about 1.000000 seconds 00:36:58.285 00:36:58.285 Latency(us) 00:36:58.285 [2024-11-29T12:20:58.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.285 [2024-11-29T12:20:58.105Z] =================================================================================================================== 00:36:58.285 [2024-11-29T12:20:58.105Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:58.285 13:20:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 2273271 
00:36:58.543 13:20:58 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2273257 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2273257 ']' 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2273257 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273257 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273257' 00:36:58.543 killing process with pid 2273257 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@973 -- # kill 2273257 00:36:58.543 13:20:58 keyring_linux -- common/autotest_common.sh@978 -- # wait 2273257 00:36:58.800 00:36:58.800 real 0m4.371s 00:36:58.800 user 0m8.211s 00:36:58.800 sys 0m1.443s 00:36:58.800 13:20:58 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:58.800 13:20:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:58.800 ************************************ 00:36:58.800 END TEST keyring_linux 00:36:58.800 ************************************ 00:36:58.800 13:20:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:58.800 13:20:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:58.800 13:20:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:58.800 13:20:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:58.800 13:20:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:58.800 13:20:58 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:58.800 13:20:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:58.800 13:20:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:58.800 13:20:58 -- common/autotest_common.sh@10 -- # set +x 00:36:58.800 13:20:58 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:58.800 13:20:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:58.800 13:20:58 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:58.800 13:20:58 -- common/autotest_common.sh@10 -- # set +x 00:37:04.061 INFO: APP EXITING 00:37:04.062 INFO: killing all VMs 00:37:04.062 INFO: killing vhost app 00:37:04.062 INFO: EXIT DONE 00:37:05.963 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:37:05.963 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:37:05.963 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:37:05.963 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:37:08.497 Cleaning 00:37:08.497 Removing: /var/run/dpdk/spdk0/config 00:37:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:08.497 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:08.497 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:08.497 Removing: /var/run/dpdk/spdk1/config 00:37:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:08.497 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:08.497 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:08.497 Removing: /var/run/dpdk/spdk2/config 00:37:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:08.497 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:08.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:08.756 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:08.756 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:08.756 Removing: /var/run/dpdk/spdk3/config 00:37:08.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:08.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:08.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:08.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:08.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:08.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:08.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:08.756 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:08.756 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:08.756 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:08.756 Removing: /var/run/dpdk/spdk4/config 00:37:08.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:08.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:08.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:08.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:08.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:08.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:08.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:08.756 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:08.756 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:08.756 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:37:08.756 Removing: /dev/shm/bdev_svc_trace.1 00:37:08.756 Removing: /dev/shm/nvmf_trace.0 00:37:08.756 Removing: /dev/shm/spdk_tgt_trace.pid1800674 00:37:08.756 Removing: /var/run/dpdk/spdk0 00:37:08.756 Removing: /var/run/dpdk/spdk1 00:37:08.756 Removing: /var/run/dpdk/spdk2 00:37:08.756 Removing: /var/run/dpdk/spdk3 00:37:08.756 Removing: /var/run/dpdk/spdk4 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1798520 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1799597 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1800674 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1801268 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1802219 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1802281 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1803256 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1803479 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1803726 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1805342 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1806626 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1806920 00:37:08.756 Removing: /var/run/dpdk/spdk_pid1807207 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1807511 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1807801 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1808051 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1808268 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1808571 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1809262 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1812562 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1813035 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1813163 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1813375 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1813652 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1813841 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1814151 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1814313 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1814633 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1814641 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1814900 00:37:08.757 Removing: 
/var/run/dpdk/spdk_pid1814909 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1815474 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1815720 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1816019 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1819723 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1823986 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1834072 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1834765 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1838979 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1839284 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1843335 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1849163 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1851822 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1862404 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1871448 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1873080 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1874008 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1890660 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1894715 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1939741 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1944920 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1950683 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1957040 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1957131 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1957869 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1958785 00:37:08.757 Removing: /var/run/dpdk/spdk_pid1959773 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1960302 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1960422 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1960721 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1960770 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1960773 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1962078 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1962993 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1963873 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1964380 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1964382 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1964619 
00:37:09.016 Removing: /var/run/dpdk/spdk_pid1965769 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1966765 00:37:09.016 Removing: /var/run/dpdk/spdk_pid1974755 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2003622 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2008135 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2009742 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2011570 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2011604 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2011827 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2012014 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2012459 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2014180 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2015072 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2015456 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2017765 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2018141 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2018777 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2023090 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2028375 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2028376 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2028377 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2032041 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2040684 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2044811 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2050810 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2052101 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2053421 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2054737 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2059210 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2063437 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2067340 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2074714 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2074723 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2079208 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2079432 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2079668 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2080121 00:37:09.016 Removing: 
/var/run/dpdk/spdk_pid2080136 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2084612 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2085180 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2089525 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2092566 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2097965 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2103296 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2112089 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2119083 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2119085 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2137889 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2138448 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2139282 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2139948 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2140523 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2141216 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2141687 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2142166 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2146408 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2146641 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2152643 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2152796 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2158237 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2162365 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2171990 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2172656 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2176882 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2177154 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2181181 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2187350 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2189929 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2199857 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2208526 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2210137 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2211041 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2226935 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2230746 00:37:09.016 Removing: /var/run/dpdk/spdk_pid2233555 
00:37:09.016 Removing: /var/run/dpdk/spdk_pid2241450
00:37:09.016 Removing: /var/run/dpdk/spdk_pid2241462
00:37:09.016 Removing: /var/run/dpdk/spdk_pid2246286
00:37:09.016 Removing: /var/run/dpdk/spdk_pid2248237
00:37:09.016 Removing: /var/run/dpdk/spdk_pid2250201
00:37:09.016 Removing: /var/run/dpdk/spdk_pid2251457
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2253437
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2254503
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2263147
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2263709
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2264171
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2266429
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2266895
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2267362
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2271149
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2271188
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2272709
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2273257
00:37:09.274 Removing: /var/run/dpdk/spdk_pid2273271
00:37:09.274 Clean
00:37:09.274 13:21:08 -- common/autotest_common.sh@1453 -- # return 0
00:37:09.274 13:21:08 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:37:09.274 13:21:08 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:09.274 13:21:08 -- common/autotest_common.sh@10 -- # set +x
00:37:09.274 13:21:08 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:37:09.274 13:21:08 -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:09.274 13:21:08 -- common/autotest_common.sh@10 -- # set +x
00:37:09.274 13:21:09 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:09.274 13:21:09 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:37:09.274 13:21:09 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:37:09.274 13:21:09 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:37:09.274 13:21:09 -- spdk/autotest.sh@398 -- # hostname
00:37:09.274 13:21:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:37:09.532 geninfo: WARNING: invalid characters removed from testname!
00:37:31.460 13:21:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:32.837 13:21:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:34.741 13:21:34 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:36.644 13:21:36 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:38.551 13:21:38 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:40.478 13:21:40 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:42.385 13:21:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:42.385 13:21:42 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:42.385 13:21:42 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:37:42.385 13:21:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:42.385 13:21:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:42.385 13:21:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:42.385 + [[ -n 1721319 ]]
00:37:42.385 + sudo kill 1721319
00:37:42.653 [Pipeline] }
00:37:42.668 [Pipeline] // stage
00:37:42.673 [Pipeline] }
00:37:42.687 [Pipeline] // timeout
00:37:42.692 [Pipeline] }
00:37:42.706 [Pipeline] // catchError
00:37:42.711 [Pipeline] }
00:37:42.726 [Pipeline] // wrap
00:37:42.732 [Pipeline] }
00:37:42.745 [Pipeline] // catchError
00:37:42.754 [Pipeline] stage
00:37:42.756 [Pipeline] { (Epilogue)
00:37:42.770 [Pipeline] catchError
00:37:42.772 [Pipeline] {
00:37:42.784 [Pipeline] echo
00:37:42.786 Cleanup processes
00:37:42.792 [Pipeline] sh
00:37:43.076 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:43.076 2284127 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:43.090 [Pipeline] sh
00:37:43.374 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:43.374 ++ grep -v 'sudo pgrep'
00:37:43.374 ++ awk '{print $1}'
00:37:43.374 + sudo kill -9
00:37:43.374 + true
00:37:43.386 [Pipeline] sh
00:37:43.669 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:55.879 [Pipeline] sh
00:37:56.160 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:56.160 Artifacts sizes are good
00:37:56.174 [Pipeline] archiveArtifacts
00:37:56.182 Archiving artifacts
00:37:56.303 [Pipeline] sh
00:37:56.587 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:37:56.601 [Pipeline] cleanWs
00:37:56.611 [WS-CLEANUP] Deleting project workspace...
00:37:56.611 [WS-CLEANUP] Deferred wipeout is used...
00:37:56.618 [WS-CLEANUP] done
00:37:56.620 [Pipeline] }
00:37:56.637 [Pipeline] // catchError
00:37:56.650 [Pipeline] sh
00:37:56.929 + logger -p user.info -t JENKINS-CI
00:37:56.937 [Pipeline] }
00:37:56.950 [Pipeline] // stage
00:37:56.956 [Pipeline] }
00:37:56.970 [Pipeline] // node
00:37:56.975 [Pipeline] End of Pipeline
00:37:57.010 Finished: SUCCESS